Test Report: Docker_Linux_crio_arm64 18925

9bd6871c0608907332c6bb982838c8ee113ad42f:2024-05-20:34544
Failed tests (3/327)

|-------|---------------------------------------------------------|--------------|
| Order | Failed test                                             | Duration (s) |
|-------|---------------------------------------------------------|--------------|
| 30    | TestAddons/parallel/Ingress                             |       168.68 |
| 32    | TestAddons/parallel/MetricsServer                       |       311.21 |
| 301   | TestStartStop/group/old-k8s-version/serial/SecondStart  |       380.68 |
|-------|---------------------------------------------------------|--------------|
TestAddons/parallel/Ingress (168.68s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-091599 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-091599 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-091599 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [afab1c5e-485e-4d46-a663-ac48f3874d94] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [afab1c5e-485e-4d46-a663-ac48f3874d94] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.00302552s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-091599 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-091599 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.83253283s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
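ssh exit status 28 here is the remote command's exit code: curl uses 28 for an operation timeout, so the request through the ingress controller never completed within curl's limit. What this step verifies is roughly equivalent to the Go sketch below, which sends the same request with the ingress rule's Host header; unlike the harness, it runs from the host against the node IP (192.168.49.2, per the docker inspect output below) instead of via `minikube ssh` inside the node, so treat it as an illustration rather than the test's exact code path.

	// Rough Go equivalent of `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'`
	// run inside the node; this version targets the node IP from the host (assumed
	// reachable), with names taken from this log.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		req, err := http.NewRequest("GET", "http://192.168.49.2/", nil)
		if err != nil {
			panic(err)
		}
		// Setting Request.Host overrides the Host header, matching curl's -H flag;
		// the ingress rule from testdata/nginx-ingress-v1.yaml routes on this host.
		req.Host = "nginx.example.com"

		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Do(req)
		if err != nil {
			// This CI run hit the analogous condition: a timeout (curl exit 28).
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("status:", resp.Status)
	}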
addons_test.go:286: (dbg) Run:  kubectl --context addons-091599 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-091599 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.066463868s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
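The empty stderr shows nslookup itself ran; the failure is the DNS query timing out against the ingress-dns addon's resolver on the node IP. A minimal Go equivalent of `nslookup hello-john.test 192.168.49.2`, assuming the addon serves plain DNS on the standard port 53 (the port is not stated in this log):

	// Resolve hello-john.test against the resolver on the node IP, bypassing
	// the system DNS configuration. Port 53 is the standard DNS port (assumed).
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true, // force Go's resolver so the custom Dial below is used
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, network, "192.168.49.2:53")
			},
		}
		addrs, err := r.LookupHost(context.Background(), "hello-john.test")
		if err != nil {
			// This run saw the nslookup analogue: "connection timed out; no
			// servers could be reached".
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("resolved to:", addrs)
	}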
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-091599 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-091599 addons disable ingress-dns --alsologtostderr -v=1: (1.287294483s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-091599 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-091599 addons disable ingress --alsologtostderr -v=1: (7.737851881s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-091599
helpers_test.go:235: (dbg) docker inspect addons-091599:

-- stdout --
	[
	    {
	        "Id": "44b1f0a47fff88b066a601e83e75dfc4e78d8aaff3a2192f2687722fb376faf8",
	        "Created": "2024-05-20T10:25:32.96313144Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1470184,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-20T10:25:33.277817654Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:56620e18f2c2c9a0448fc43c42f840334bd2baea497ff8deae66477dd0dbfecf",
	        "ResolvConfPath": "/var/lib/docker/containers/44b1f0a47fff88b066a601e83e75dfc4e78d8aaff3a2192f2687722fb376faf8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/44b1f0a47fff88b066a601e83e75dfc4e78d8aaff3a2192f2687722fb376faf8/hostname",
	        "HostsPath": "/var/lib/docker/containers/44b1f0a47fff88b066a601e83e75dfc4e78d8aaff3a2192f2687722fb376faf8/hosts",
	        "LogPath": "/var/lib/docker/containers/44b1f0a47fff88b066a601e83e75dfc4e78d8aaff3a2192f2687722fb376faf8/44b1f0a47fff88b066a601e83e75dfc4e78d8aaff3a2192f2687722fb376faf8-json.log",
	        "Name": "/addons-091599",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-091599:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-091599",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a50bffb6808659c6b2a8ed19423e5bfdb46fd7d7add6c832d59069960daad04b-init/diff:/var/lib/docker/overlay2/85c5c7809a5d893ae54ed3fa4fb6194b99d9d246c69ccb3f2daa2ee41dec0e23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a50bffb6808659c6b2a8ed19423e5bfdb46fd7d7add6c832d59069960daad04b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a50bffb6808659c6b2a8ed19423e5bfdb46fd7d7add6c832d59069960daad04b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a50bffb6808659c6b2a8ed19423e5bfdb46fd7d7add6c832d59069960daad04b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-091599",
	                "Source": "/var/lib/docker/volumes/addons-091599/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-091599",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-091599",
	                "name.minikube.sigs.k8s.io": "addons-091599",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d4bea263c992ec87ac15433b26f3304b0c191d98c61cbfc85046de9f7a426f9d",
	            "SandboxKey": "/var/run/docker/netns/d4bea263c992",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40497"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40496"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40493"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40495"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40494"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-091599": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "7347e77336db8d8ba56e5c364b3264e0485726a8e13495b5a03984bece7ecde7",
	                    "EndpointID": "32c212faf5763b907738236b5e43f219adfcce86d98737d396908605bddb542e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-091599",
	                        "44b1f0a47fff"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
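Single fields of this output can be pulled with docker inspect's Go-template -f flag instead of parsing the full JSON; the harness does exactly that later in these logs (see the cli_runner lines querying NetworkSettings.Ports). A sketch of that extraction for the host port mapped to the node's SSH port, using the container name and mapping shown above:

	// Extract the host port bound to 22/tcp, mirroring the cli_runner template
	// used later in this log. Prints 40497 for the container inspected above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"addons-091599").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}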
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-091599 -n addons-091599
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-091599 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-091599 logs -n 25: (1.50207601s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	| delete  | -p download-only-692242                                                                     | download-only-692242   | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	| delete  | -p download-only-801226                                                                     | download-only-801226   | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	| delete  | -p download-only-692242                                                                     | download-only-692242   | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	| start   | --download-only -p                                                                          | download-docker-161399 | jenkins | v1.33.1 | 20 May 24 10:25 UTC |                     |
	|         | download-docker-161399                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-161399                                                                   | download-docker-161399 | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-390288   | jenkins | v1.33.1 | 20 May 24 10:25 UTC |                     |
	|         | binary-mirror-390288                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33461                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-390288                                                                     | binary-mirror-390288   | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	| addons  | enable dashboard -p                                                                         | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:25 UTC |                     |
	|         | addons-091599                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:25 UTC |                     |
	|         | addons-091599                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-091599 --wait=true                                                                | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:29 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:29 UTC | 20 May 24 10:29 UTC |
	|         | -p addons-091599                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-091599 ip                                                                            | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:29 UTC | 20 May 24 10:29 UTC |
	| addons  | addons-091599 addons disable                                                                | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:29 UTC | 20 May 24 10:29 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:29 UTC | 20 May 24 10:29 UTC |
	|         | -p addons-091599                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-091599 ssh cat                                                                       | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:29 UTC | 20 May 24 10:29 UTC |
	|         | /opt/local-path-provisioner/pvc-2b457869-27d5-410a-999e-eb21b51d4e81_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-091599 addons disable                                                                | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:29 UTC | 20 May 24 10:30 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:29 UTC | 20 May 24 10:29 UTC |
	|         | addons-091599                                                                               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:30 UTC | 20 May 24 10:30 UTC |
	|         | addons-091599                                                                               |                        |         |         |                     |                     |
	| addons  | addons-091599 addons                                                                        | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:30 UTC | 20 May 24 10:30 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-091599 addons                                                                        | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:30 UTC | 20 May 24 10:30 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-091599 ssh curl -s                                                                   | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:30 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-091599 ip                                                                            | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:32 UTC | 20 May 24 10:32 UTC |
	| addons  | addons-091599 addons disable                                                                | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:33 UTC | 20 May 24 10:33 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-091599 addons disable                                                                | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:33 UTC | 20 May 24 10:33 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 10:25:09
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 10:25:09.080403 1469715 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:25:09.080573 1469715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:25:09.080601 1469715 out.go:304] Setting ErrFile to fd 2...
	I0520 10:25:09.080620 1469715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:25:09.080903 1469715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-1463640/.minikube/bin
	I0520 10:25:09.081437 1469715 out.go:298] Setting JSON to false
	I0520 10:25:09.082395 1469715 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":151656,"bootTime":1716049053,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0520 10:25:09.082463 1469715 start.go:139] virtualization:  
	I0520 10:25:09.084930 1469715 out.go:177] * [addons-091599] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0520 10:25:09.087106 1469715 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 10:25:09.087298 1469715 notify.go:220] Checking for updates...
	I0520 10:25:09.088623 1469715 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 10:25:09.090592 1469715 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-1463640/kubeconfig
	I0520 10:25:09.092409 1469715 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-1463640/.minikube
	I0520 10:25:09.094194 1469715 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0520 10:25:09.095893 1469715 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 10:25:09.097633 1469715 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 10:25:09.118348 1469715 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0520 10:25:09.118508 1469715 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 10:25:09.180133 1469715 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-05-20 10:25:09.170853173 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 10:25:09.180238 1469715 docker.go:295] overlay module found
	I0520 10:25:09.182303 1469715 out.go:177] * Using the docker driver based on user configuration
	I0520 10:25:09.183888 1469715 start.go:297] selected driver: docker
	I0520 10:25:09.183909 1469715 start.go:901] validating driver "docker" against <nil>
	I0520 10:25:09.183924 1469715 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 10:25:09.184589 1469715 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 10:25:09.235696 1469715 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-05-20 10:25:09.226639359 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 10:25:09.235871 1469715 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 10:25:09.236130 1469715 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 10:25:09.237891 1469715 out.go:177] * Using Docker driver with root privileges
	I0520 10:25:09.239577 1469715 cni.go:84] Creating CNI manager for ""
	I0520 10:25:09.239609 1469715 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0520 10:25:09.239629 1469715 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 10:25:09.239709 1469715 start.go:340] cluster config:
	{Name:addons-091599 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-091599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:25:09.241843 1469715 out.go:177] * Starting "addons-091599" primary control-plane node in "addons-091599" cluster
	I0520 10:25:09.243460 1469715 cache.go:121] Beginning downloading kic base image for docker with crio
	I0520 10:25:09.245143 1469715 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0520 10:25:09.246707 1469715 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:25:09.246747 1469715 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0520 10:25:09.246765 1469715 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-1463640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4
	I0520 10:25:09.246774 1469715 cache.go:56] Caching tarball of preloaded images
	I0520 10:25:09.246858 1469715 preload.go:173] Found /home/jenkins/minikube-integration/18925-1463640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0520 10:25:09.246868 1469715 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 10:25:09.247250 1469715 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/config.json ...
	I0520 10:25:09.247283 1469715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/config.json: {Name:mk3ac92895713af11e0d1505d2a19f0e41cd4c3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:25:09.261028 1469715 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a to local cache
	I0520 10:25:09.261152 1469715 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local cache directory
	I0520 10:25:09.261172 1469715 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local cache directory, skipping pull
	I0520 10:25:09.261179 1469715 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in cache, skipping pull
	I0520 10:25:09.261186 1469715 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a as a tarball
	I0520 10:25:09.261191 1469715 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a from local cache
	I0520 10:25:26.039653 1469715 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a from cached tarball
	I0520 10:25:26.039694 1469715 cache.go:194] Successfully downloaded all kic artifacts
	I0520 10:25:26.039755 1469715 start.go:360] acquireMachinesLock for addons-091599: {Name:mk13a7ebbe82875043afa1a044664bb821768911 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 10:25:26.039900 1469715 start.go:364] duration metric: took 122.033µs to acquireMachinesLock for "addons-091599"
	I0520 10:25:26.039934 1469715 start.go:93] Provisioning new machine with config: &{Name:addons-091599 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-091599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 10:25:26.040027 1469715 start.go:125] createHost starting for "" (driver="docker")
	I0520 10:25:26.042409 1469715 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0520 10:25:26.042670 1469715 start.go:159] libmachine.API.Create for "addons-091599" (driver="docker")
	I0520 10:25:26.042706 1469715 client.go:168] LocalClient.Create starting
	I0520 10:25:26.042820 1469715 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca.pem
	I0520 10:25:26.180627 1469715 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/cert.pem
	I0520 10:25:26.645639 1469715 cli_runner.go:164] Run: docker network inspect addons-091599 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0520 10:25:26.660816 1469715 cli_runner.go:211] docker network inspect addons-091599 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0520 10:25:26.660930 1469715 network_create.go:281] running [docker network inspect addons-091599] to gather additional debugging logs...
	I0520 10:25:26.660955 1469715 cli_runner.go:164] Run: docker network inspect addons-091599
	W0520 10:25:26.676759 1469715 cli_runner.go:211] docker network inspect addons-091599 returned with exit code 1
	I0520 10:25:26.676797 1469715 network_create.go:284] error running [docker network inspect addons-091599]: docker network inspect addons-091599: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-091599 not found
	I0520 10:25:26.676831 1469715 network_create.go:286] output of [docker network inspect addons-091599]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-091599 not found
	
	** /stderr **
	I0520 10:25:26.676953 1469715 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0520 10:25:26.691979 1469715 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001753e90}
	I0520 10:25:26.692024 1469715 network_create.go:124] attempt to create docker network addons-091599 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0520 10:25:26.692124 1469715 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-091599 addons-091599
	I0520 10:25:26.759341 1469715 network_create.go:108] docker network addons-091599 192.168.49.0/24 created
	I0520 10:25:26.759376 1469715 kic.go:121] calculated static IP "192.168.49.2" for the "addons-091599" container
	I0520 10:25:26.759450 1469715 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0520 10:25:26.773195 1469715 cli_runner.go:164] Run: docker volume create addons-091599 --label name.minikube.sigs.k8s.io=addons-091599 --label created_by.minikube.sigs.k8s.io=true
	I0520 10:25:26.789210 1469715 oci.go:103] Successfully created a docker volume addons-091599
	I0520 10:25:26.789319 1469715 cli_runner.go:164] Run: docker run --rm --name addons-091599-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-091599 --entrypoint /usr/bin/test -v addons-091599:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0520 10:25:28.758648 1469715 cli_runner.go:217] Completed: docker run --rm --name addons-091599-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-091599 --entrypoint /usr/bin/test -v addons-091599:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib: (1.969286568s)
	I0520 10:25:28.758677 1469715 oci.go:107] Successfully prepared a docker volume addons-091599
	I0520 10:25:28.758704 1469715 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:25:28.758723 1469715 kic.go:194] Starting extracting preloaded images to volume ...
	I0520 10:25:28.758812 1469715 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18925-1463640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-091599:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0520 10:25:32.888774 1469715 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18925-1463640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-091599:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.129921014s)
	I0520 10:25:32.888808 1469715 kic.go:203] duration metric: took 4.130080764s to extract preloaded images to volume ...
	W0520 10:25:32.888961 1469715 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0520 10:25:32.889077 1469715 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0520 10:25:32.946323 1469715 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-091599 --name addons-091599 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-091599 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-091599 --network addons-091599 --ip 192.168.49.2 --volume addons-091599:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0520 10:25:33.285520 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Running}}
	I0520 10:25:33.309772 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:25:33.332766 1469715 cli_runner.go:164] Run: docker exec addons-091599 stat /var/lib/dpkg/alternatives/iptables
	I0520 10:25:33.409349 1469715 oci.go:144] the created container "addons-091599" has a running status.
	I0520 10:25:33.409381 1469715 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa...
	I0520 10:25:33.600927 1469715 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0520 10:25:33.626113 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:25:33.645397 1469715 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0520 10:25:33.645417 1469715 kic_runner.go:114] Args: [docker exec --privileged addons-091599 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0520 10:25:33.717170 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:25:33.737840 1469715 machine.go:94] provisionDockerMachine start ...
	I0520 10:25:33.737926 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:25:33.762824 1469715 main.go:141] libmachine: Using SSH client type: native
	I0520 10:25:33.763192 1469715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 40497 <nil> <nil>}
	I0520 10:25:33.763211 1469715 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 10:25:33.763904 1469715 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0520 10:25:36.893223 1469715 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-091599
	
	I0520 10:25:36.893249 1469715 ubuntu.go:169] provisioning hostname "addons-091599"
	I0520 10:25:36.893324 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:25:36.911530 1469715 main.go:141] libmachine: Using SSH client type: native
	I0520 10:25:36.911784 1469715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 40497 <nil> <nil>}
	I0520 10:25:36.911803 1469715 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-091599 && echo "addons-091599" | sudo tee /etc/hostname
	I0520 10:25:37.050995 1469715 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-091599
	
	I0520 10:25:37.051098 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:25:37.068361 1469715 main.go:141] libmachine: Using SSH client type: native
	I0520 10:25:37.068603 1469715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 40497 <nil> <nil>}
	I0520 10:25:37.068619 1469715 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-091599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-091599/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-091599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 10:25:37.193660 1469715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
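	The hostname provisioning above can be spot-checked by hand; a sketch against the same container, not part of the test run:
	
	# confirm the hostname and the 127.0.1.1 alias written by the script above
	docker exec addons-091599 hostname
	docker exec addons-091599 grep 127.0.1.1 /etc/hosts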
	I0520 10:25:37.193687 1469715 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18925-1463640/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-1463640/.minikube}
	I0520 10:25:37.193711 1469715 ubuntu.go:177] setting up certificates
	I0520 10:25:37.193720 1469715 provision.go:84] configureAuth start
	I0520 10:25:37.193789 1469715 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-091599
	I0520 10:25:37.216602 1469715 provision.go:143] copyHostCerts
	I0520 10:25:37.216685 1469715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.pem (1082 bytes)
	I0520 10:25:37.216820 1469715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-1463640/.minikube/cert.pem (1123 bytes)
	I0520 10:25:37.216878 1469715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-1463640/.minikube/key.pem (1679 bytes)
	I0520 10:25:37.216922 1469715 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca-key.pem org=jenkins.addons-091599 san=[127.0.0.1 192.168.49.2 addons-091599 localhost minikube]
	I0520 10:25:37.836687 1469715 provision.go:177] copyRemoteCerts
	I0520 10:25:37.836819 1469715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 10:25:37.836863 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:25:37.852667 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:25:37.942307 1469715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 10:25:37.966185 1469715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 10:25:37.989888 1469715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 10:25:38.016225 1469715 provision.go:87] duration metric: took 822.492148ms to configureAuth
	I0520 10:25:38.016254 1469715 ubuntu.go:193] setting minikube options for container-runtime
	I0520 10:25:38.016480 1469715 config.go:182] Loaded profile config "addons-091599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:25:38.016596 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:25:38.034046 1469715 main.go:141] libmachine: Using SSH client type: native
	I0520 10:25:38.034298 1469715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 40497 <nil> <nil>}
	I0520 10:25:38.034319 1469715 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 10:25:38.265065 1469715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 10:25:38.265086 1469715 machine.go:97] duration metric: took 4.527227871s to provisionDockerMachine
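	The CRI-O drop-in written above can be verified the same way; a sketch, assuming the paths from the logged command:
	
	# show the generated environment file and confirm cri-o restarted cleanly
	docker exec addons-091599 cat /etc/sysconfig/crio.minikube
	docker exec addons-091599 systemctl is-active crio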
	I0520 10:25:38.265102 1469715 client.go:171] duration metric: took 12.222379788s to LocalClient.Create
	I0520 10:25:38.265114 1469715 start.go:167] duration metric: took 12.222445706s to libmachine.API.Create "addons-091599"
	I0520 10:25:38.265121 1469715 start.go:293] postStartSetup for "addons-091599" (driver="docker")
	I0520 10:25:38.265132 1469715 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 10:25:38.265199 1469715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 10:25:38.265239 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:25:38.285298 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:25:38.378680 1469715 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 10:25:38.381724 1469715 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0520 10:25:38.381762 1469715 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0520 10:25:38.381774 1469715 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0520 10:25:38.381782 1469715 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0520 10:25:38.381797 1469715 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-1463640/.minikube/addons for local assets ...
	I0520 10:25:38.381864 1469715 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-1463640/.minikube/files for local assets ...
	I0520 10:25:38.381900 1469715 start.go:296] duration metric: took 116.773025ms for postStartSetup
	I0520 10:25:38.382203 1469715 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-091599
	I0520 10:25:38.397956 1469715 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/config.json ...
	I0520 10:25:38.398243 1469715 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:25:38.398294 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:25:38.413400 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:25:38.506518 1469715 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0520 10:25:38.510812 1469715 start.go:128] duration metric: took 12.470768346s to createHost
	I0520 10:25:38.510839 1469715 start.go:83] releasing machines lock for "addons-091599", held for 12.470923428s
	I0520 10:25:38.510912 1469715 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-091599
	I0520 10:25:38.527317 1469715 ssh_runner.go:195] Run: cat /version.json
	I0520 10:25:38.527388 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:25:38.527439 1469715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 10:25:38.527508 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:25:38.559200 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:25:38.560136 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:25:38.645352 1469715 ssh_runner.go:195] Run: systemctl --version
	I0520 10:25:38.759630 1469715 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 10:25:38.903020 1469715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 10:25:38.907324 1469715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 10:25:38.929290 1469715 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0520 10:25:38.929415 1469715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 10:25:38.960715 1469715 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
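	The disabled configs are only renamed with a .mk_disabled suffix, so they remain inspectable (and restorable). A sketch:
	
	# list the CNI configs the bootstrap moved out of the way
	docker exec addons-091599 sh -c 'ls /etc/cni/net.d/*.mk_disabled'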
	I0520 10:25:38.960737 1469715 start.go:494] detecting cgroup driver to use...
	I0520 10:25:38.960775 1469715 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0520 10:25:38.960826 1469715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 10:25:38.979364 1469715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 10:25:38.992856 1469715 docker.go:217] disabling cri-docker service (if available) ...
	I0520 10:25:38.992994 1469715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 10:25:39.009434 1469715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 10:25:39.026353 1469715 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 10:25:39.128752 1469715 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 10:25:39.226715 1469715 docker.go:233] disabling docker service ...
	I0520 10:25:39.226829 1469715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 10:25:39.249326 1469715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 10:25:39.262535 1469715 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 10:25:39.354753 1469715 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 10:25:39.455469 1469715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 10:25:39.466921 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 10:25:39.483780 1469715 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 10:25:39.483852 1469715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:25:39.494428 1469715 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 10:25:39.494520 1469715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:25:39.504557 1469715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:25:39.514380 1469715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:25:39.524517 1469715 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 10:25:39.533760 1469715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:25:39.544101 1469715 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:25:39.560573 1469715 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
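	The net effect of the sed edits above can be checked with one grep; a sketch, run inside the node:
	
	# pause image, cgroup driver, conmon cgroup and the unprivileged-port sysctl
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf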
	I0520 10:25:39.571064 1469715 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 10:25:39.580236 1469715 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 10:25:39.589253 1469715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:25:39.674366 1469715 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 10:25:39.803394 1469715 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 10:25:39.803505 1469715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 10:25:39.807728 1469715 start.go:562] Will wait 60s for crictl version
	I0520 10:25:39.807820 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:25:39.811172 1469715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 10:25:39.850093 1469715 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0520 10:25:39.850225 1469715 ssh_runner.go:195] Run: crio --version
	I0520 10:25:39.892660 1469715 ssh_runner.go:195] Run: crio --version
	I0520 10:25:39.934189 1469715 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.24.6 ...
	I0520 10:25:39.936064 1469715 cli_runner.go:164] Run: docker network inspect addons-091599 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0520 10:25:39.950266 1469715 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0520 10:25:39.953884 1469715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 10:25:39.964412 1469715 kubeadm.go:877] updating cluster {Name:addons-091599 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-091599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 10:25:39.964542 1469715 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:25:39.964609 1469715 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 10:25:40.058719 1469715 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 10:25:40.058743 1469715 crio.go:433] Images already preloaded, skipping extraction
	I0520 10:25:40.058810 1469715 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 10:25:40.099576 1469715 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 10:25:40.099603 1469715 cache_images.go:84] Images are preloaded, skipping loading
	I0520 10:25:40.099613 1469715 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 crio true true} ...
	I0520 10:25:40.099726 1469715 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-091599 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-091599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
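	The unit and drop-in above are copied onto the node a few lines below (kubelet.service and 10-kubeadm.conf); once in place, systemd can render the effective unit. A sketch, run inside the node:
	
	# show kubelet.service together with all drop-ins, including 10-kubeadm.conf
	systemctl cat kubelet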
	I0520 10:25:40.099834 1469715 ssh_runner.go:195] Run: crio config
	I0520 10:25:40.150330 1469715 cni.go:84] Creating CNI manager for ""
	I0520 10:25:40.150354 1469715 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0520 10:25:40.150363 1469715 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 10:25:40.150408 1469715 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-091599 NodeName:addons-091599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 10:25:40.150588 1469715 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-091599"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
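	The generated config is written to /var/tmp/minikube/kubeadm.yaml below and consumed by kubeadm init; recent kubeadm releases can also statically check such a file. A sketch (the validate subcommand is an assumption about the bundled binary, not something this run exercises):
	
	# static sanity check of the rendered kubeadm config
	sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml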
	
	I0520 10:25:40.150669 1469715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 10:25:40.160762 1469715 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 10:25:40.160880 1469715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 10:25:40.170118 1469715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0520 10:25:40.188254 1469715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 10:25:40.206672 1469715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0520 10:25:40.224342 1469715 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0520 10:25:40.227700 1469715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 10:25:40.238259 1469715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:25:40.319043 1469715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 10:25:40.332592 1469715 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599 for IP: 192.168.49.2
	I0520 10:25:40.332670 1469715 certs.go:194] generating shared ca certs ...
	I0520 10:25:40.332703 1469715 certs.go:226] acquiring lock for ca certs: {Name:mke113fbac30e255083f63bab9dafb629ead7667 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:25:40.332874 1469715 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.key
	I0520 10:25:40.587546 1469715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.crt ...
	I0520 10:25:40.587581 1469715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.crt: {Name:mka4f6d7c1010d187841c8e9323a4a2f71d05d5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:25:40.587813 1469715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.key ...
	I0520 10:25:40.587828 1469715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.key: {Name:mk3e2eb9d9ca29aa42fb5e69046b5e5858b088cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:25:40.587927 1469715 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/proxy-client-ca.key
	I0520 10:25:40.949528 1469715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-1463640/.minikube/proxy-client-ca.crt ...
	I0520 10:25:40.949562 1469715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/proxy-client-ca.crt: {Name:mk9ab1563bc061863c83b70b953645aba3460f3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:25:40.950818 1469715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-1463640/.minikube/proxy-client-ca.key ...
	I0520 10:25:40.950839 1469715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/proxy-client-ca.key: {Name:mkcbd84db3d5170ba1258a231d433cab816854ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:25:40.950972 1469715 certs.go:256] generating profile certs ...
	I0520 10:25:40.951042 1469715 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.key
	I0520 10:25:40.951065 1469715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt with IP's: []
	I0520 10:25:41.829835 1469715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt ...
	I0520 10:25:41.829881 1469715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: {Name:mkb5fcf325622e3c9a0048438f88c8b12065563b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:25:41.830088 1469715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.key ...
	I0520 10:25:41.830103 1469715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.key: {Name:mkd2dc77edbe3f95362d0d11399740c9ccfbe043 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:25:41.830197 1469715 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/apiserver.key.e700fd87
	I0520 10:25:41.830225 1469715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/apiserver.crt.e700fd87 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0520 10:25:42.187828 1469715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/apiserver.crt.e700fd87 ...
	I0520 10:25:42.187871 1469715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/apiserver.crt.e700fd87: {Name:mk210e5cd684329a0af9a80844914a0601cf4e99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:25:42.188590 1469715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/apiserver.key.e700fd87 ...
	I0520 10:25:42.188618 1469715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/apiserver.key.e700fd87: {Name:mk483ff63e42360c514f19d5304d4a7595702090 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:25:42.188770 1469715 certs.go:381] copying /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/apiserver.crt.e700fd87 -> /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/apiserver.crt
	I0520 10:25:42.188872 1469715 certs.go:385] copying /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/apiserver.key.e700fd87 -> /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/apiserver.key
	I0520 10:25:42.188953 1469715 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/proxy-client.key
	I0520 10:25:42.188990 1469715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/proxy-client.crt with IP's: []
	I0520 10:25:42.720110 1469715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/proxy-client.crt ...
	I0520 10:25:42.720145 1469715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/proxy-client.crt: {Name:mk755bdcbc9aac122bf017d6e27211d5de37f0f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:25:42.720806 1469715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/proxy-client.key ...
	I0520 10:25:42.720825 1469715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/proxy-client.key: {Name:mk936e69555d89195e2c964aa757467423411687 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:25:42.721030 1469715 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 10:25:42.721077 1469715 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca.pem (1082 bytes)
	I0520 10:25:42.721107 1469715 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/cert.pem (1123 bytes)
	I0520 10:25:42.721140 1469715 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/key.pem (1679 bytes)
	I0520 10:25:42.721784 1469715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 10:25:42.747166 1469715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 10:25:42.771351 1469715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 10:25:42.797107 1469715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 10:25:42.820806 1469715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0520 10:25:42.844977 1469715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 10:25:42.868336 1469715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 10:25:42.891509 1469715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 10:25:42.916132 1469715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 10:25:42.941488 1469715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 10:25:42.959322 1469715 ssh_runner.go:195] Run: openssl version
	I0520 10:25:42.964660 1469715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 10:25:42.974046 1469715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:25:42.977508 1469715 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:25:42.977578 1469715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:25:42.984471 1469715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
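	The b5213941.0 symlink name is the OpenSSL subject hash of minikubeCA, which is exactly what the x509 -hash call above computes; by hand:
	
	# print the subject hash that names the /etc/ssl/certs symlink
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941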
	I0520 10:25:42.993801 1469715 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 10:25:42.997045 1469715 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 10:25:42.997100 1469715 kubeadm.go:391] StartCluster: {Name:addons-091599 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-091599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:25:42.997190 1469715 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 10:25:42.997258 1469715 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 10:25:43.038409 1469715 cri.go:89] found id: ""
	I0520 10:25:43.038530 1469715 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 10:25:43.047529 1469715 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 10:25:43.056759 1469715 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0520 10:25:43.056829 1469715 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 10:25:43.065876 1469715 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 10:25:43.065896 1469715 kubeadm.go:156] found existing configuration files:
	
	I0520 10:25:43.065951 1469715 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 10:25:43.074759 1469715 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 10:25:43.074829 1469715 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 10:25:43.083363 1469715 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 10:25:43.092022 1469715 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 10:25:43.092120 1469715 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 10:25:43.100272 1469715 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 10:25:43.108570 1469715 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 10:25:43.108651 1469715 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 10:25:43.116988 1469715 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 10:25:43.125362 1469715 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 10:25:43.125455 1469715 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 10:25:43.133954 1469715 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0520 10:25:43.180900 1469715 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 10:25:43.181128 1469715 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 10:25:43.219107 1469715 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0520 10:25:43.219223 1469715 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1058-aws
	I0520 10:25:43.219276 1469715 kubeadm.go:309] OS: Linux
	I0520 10:25:43.219345 1469715 kubeadm.go:309] CGROUPS_CPU: enabled
	I0520 10:25:43.219418 1469715 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0520 10:25:43.219490 1469715 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0520 10:25:43.219553 1469715 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0520 10:25:43.219623 1469715 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0520 10:25:43.219694 1469715 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0520 10:25:43.219763 1469715 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0520 10:25:43.219830 1469715 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0520 10:25:43.219899 1469715 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0520 10:25:43.297971 1469715 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 10:25:43.298125 1469715 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 10:25:43.298239 1469715 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0520 10:25:43.564279 1469715 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 10:25:43.569008 1469715 out.go:204]   - Generating certificates and keys ...
	I0520 10:25:43.569210 1469715 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 10:25:43.569301 1469715 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 10:25:43.926684 1469715 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 10:25:44.317123 1469715 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 10:25:44.608533 1469715 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 10:25:44.786528 1469715 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 10:25:45.235651 1469715 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 10:25:45.235819 1469715 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-091599 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0520 10:25:45.983063 1469715 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 10:25:45.983206 1469715 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-091599 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0520 10:25:46.681532 1469715 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 10:25:47.064603 1469715 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 10:25:47.262685 1469715 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 10:25:47.262957 1469715 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 10:25:47.633283 1469715 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 10:25:47.987177 1469715 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 10:25:48.585760 1469715 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 10:25:48.728731 1469715 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 10:25:50.176830 1469715 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 10:25:50.177716 1469715 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 10:25:50.182372 1469715 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 10:25:50.184464 1469715 out.go:204]   - Booting up control plane ...
	I0520 10:25:50.184567 1469715 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 10:25:50.184644 1469715 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 10:25:50.185368 1469715 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 10:25:50.195670 1469715 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 10:25:50.196937 1469715 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 10:25:50.197007 1469715 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 10:25:50.286247 1469715 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 10:25:50.286339 1469715 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 10:25:51.787533 1469715 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.501460471s
	I0520 10:25:51.787619 1469715 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 10:25:57.289145 1469715 kubeadm.go:309] [api-check] The API server is healthy after 5.501823227s
	I0520 10:25:57.311864 1469715 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 10:25:57.326168 1469715 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 10:25:57.349968 1469715 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 10:25:57.350158 1469715 kubeadm.go:309] [mark-control-plane] Marking the node addons-091599 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 10:25:57.361988 1469715 kubeadm.go:309] [bootstrap-token] Using token: zcfe6y.yzbrm53m11ivv0h7
	I0520 10:25:57.363945 1469715 out.go:204]   - Configuring RBAC rules ...
	I0520 10:25:57.364078 1469715 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 10:25:57.370969 1469715 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 10:25:57.384207 1469715 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 10:25:57.387563 1469715 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 10:25:57.391438 1469715 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 10:25:57.397185 1469715 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 10:25:57.696324 1469715 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 10:25:58.128329 1469715 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 10:25:58.695809 1469715 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 10:25:58.696800 1469715 kubeadm.go:309] 
	I0520 10:25:58.696876 1469715 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 10:25:58.696890 1469715 kubeadm.go:309] 
	I0520 10:25:58.696969 1469715 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 10:25:58.696981 1469715 kubeadm.go:309] 
	I0520 10:25:58.697007 1469715 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 10:25:58.697070 1469715 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 10:25:58.697124 1469715 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 10:25:58.697132 1469715 kubeadm.go:309] 
	I0520 10:25:58.697184 1469715 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 10:25:58.697192 1469715 kubeadm.go:309] 
	I0520 10:25:58.697238 1469715 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 10:25:58.697246 1469715 kubeadm.go:309] 
	I0520 10:25:58.697296 1469715 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 10:25:58.697381 1469715 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 10:25:58.697451 1469715 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 10:25:58.697459 1469715 kubeadm.go:309] 
	I0520 10:25:58.697545 1469715 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 10:25:58.697622 1469715 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 10:25:58.697630 1469715 kubeadm.go:309] 
	I0520 10:25:58.697726 1469715 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token zcfe6y.yzbrm53m11ivv0h7 \
	I0520 10:25:58.697829 1469715 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e4ec3248f7179a7e7b3262b27f9565d878f3b66abe6f06904dcca5f386d0f173 \
	I0520 10:25:58.697854 1469715 kubeadm.go:309] 	--control-plane 
	I0520 10:25:58.697862 1469715 kubeadm.go:309] 
	I0520 10:25:58.697944 1469715 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 10:25:58.697952 1469715 kubeadm.go:309] 
	I0520 10:25:58.698030 1469715 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token zcfe6y.yzbrm53m11ivv0h7 \
	I0520 10:25:58.698132 1469715 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e4ec3248f7179a7e7b3262b27f9565d878f3b66abe6f06904dcca5f386d0f173 
	I0520 10:25:58.701638 1469715 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1058-aws\n", err: exit status 1
	I0520 10:25:58.701783 1469715 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
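	The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key; it can be recomputed on the node with the standard kubeadm recipe (a sketch, using the certificatesDir from the config above):
	
	# should reproduce the e4ec3248... value printed by kubeadm init
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl pkey -pubin -outform der \
	  | openssl dgst -sha256 -hex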
	I0520 10:25:58.701820 1469715 cni.go:84] Creating CNI manager for ""
	I0520 10:25:58.701834 1469715 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0520 10:25:58.704152 1469715 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0520 10:25:58.705748 1469715 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0520 10:25:58.711243 1469715 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0520 10:25:58.711269 1469715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0520 10:25:58.734331 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0520 10:25:58.990818 1469715 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 10:25:58.990987 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-091599 minikube.k8s.io/updated_at=2024_05_20T10_25_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=addons-091599 minikube.k8s.io/primary=true
	I0520 10:25:58.991011 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:25:59.137983 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:25:59.138059 1469715 ops.go:34] apiserver oom_adj: -16
	I0520 10:25:59.638875 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:00.139060 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:00.638694 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:01.138150 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:01.638545 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:02.138119 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:02.638803 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:03.138106 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:03.638255 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:04.138295 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:04.638113 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:05.138909 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:05.638804 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:06.138873 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:06.638372 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:07.138944 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:07.638487 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:08.138869 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:08.638644 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:09.138285 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:09.638284 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:10.138132 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:10.638517 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:11.138923 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:11.229865 1469715 kubeadm.go:1107] duration metric: took 12.238969721s to wait for elevateKubeSystemPrivileges
	W0520 10:26:11.229904 1469715 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 10:26:11.229912 1469715 kubeadm.go:393] duration metric: took 28.232819608s to StartCluster
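The burst of identical "kubectl get sa default" runs above is minikube's readiness gate for RBAC bootstrap (elevateKubeSystemPrivileges): it re-issues the command every ~500ms until the default service account exists, then records the elapsed duration. A minimal Go sketch of the same poll-until-ready pattern; runKubectl is a hypothetical stand-in for minikube's ssh_runner, and kubectl on PATH pointing at the new cluster is assumed:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // runKubectl is a hypothetical helper; minikube runs the real
    // binary over SSH via ssh_runner instead.
    func runKubectl(args ...string) error {
    	return exec.Command("kubectl", args...).Run()
    }

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		// Succeeds only once the default service account exists.
    		if err := runKubectl("get", "sa", "default"); err == nil {
    			fmt.Println("default service account is ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
    	}
    	fmt.Println("timed out waiting for default service account")
    }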
	I0520 10:26:11.229929 1469715 settings.go:142] acquiring lock: {Name:mkcb442de9baf8dd2fb339ccf162868e80429e33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:26:11.230508 1469715 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-1463640/kubeconfig
	I0520 10:26:11.230901 1469715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/kubeconfig: {Name:mk86e76ecc665bde4f67c226ceb67716f06a54d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:26:11.231127 1469715 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 10:26:11.233779 1469715 out.go:177] * Verifying Kubernetes components...
	I0520 10:26:11.231228 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 10:26:11.231390 1469715 config.go:182] Loaded profile config "addons-091599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:26:11.231399 1469715 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0520 10:26:11.235785 1469715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:26:11.235790 1469715 addons.go:69] Setting yakd=true in profile "addons-091599"
	I0520 10:26:11.235820 1469715 addons.go:234] Setting addon yakd=true in "addons-091599"
	I0520 10:26:11.235859 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.235876 1469715 addons.go:69] Setting ingress-dns=true in profile "addons-091599"
	I0520 10:26:11.235897 1469715 addons.go:234] Setting addon ingress-dns=true in "addons-091599"
	I0520 10:26:11.235925 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.236365 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.236413 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.237027 1469715 addons.go:69] Setting cloud-spanner=true in profile "addons-091599"
	I0520 10:26:11.237077 1469715 addons.go:234] Setting addon cloud-spanner=true in "addons-091599"
	I0520 10:26:11.237105 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.237587 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.237992 1469715 addons.go:69] Setting inspektor-gadget=true in profile "addons-091599"
	I0520 10:26:11.238024 1469715 addons.go:234] Setting addon inspektor-gadget=true in "addons-091599"
	I0520 10:26:11.238052 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.238440 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.240137 1469715 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-091599"
	I0520 10:26:11.240227 1469715 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-091599"
	I0520 10:26:11.240261 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.240757 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.242743 1469715 addons.go:69] Setting metrics-server=true in profile "addons-091599"
	I0520 10:26:11.242794 1469715 addons.go:234] Setting addon metrics-server=true in "addons-091599"
	I0520 10:26:11.242831 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.243303 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.247496 1469715 addons.go:69] Setting default-storageclass=true in profile "addons-091599"
	I0520 10:26:11.247570 1469715 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-091599"
	I0520 10:26:11.247938 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.250377 1469715 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-091599"
	I0520 10:26:11.250439 1469715 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-091599"
	I0520 10:26:11.250481 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.252737 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.259581 1469715 addons.go:69] Setting gcp-auth=true in profile "addons-091599"
	I0520 10:26:11.259649 1469715 mustload.go:65] Loading cluster: addons-091599
	I0520 10:26:11.259875 1469715 config.go:182] Loaded profile config "addons-091599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:26:11.260175 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.266089 1469715 addons.go:69] Setting registry=true in profile "addons-091599"
	I0520 10:26:11.266142 1469715 addons.go:234] Setting addon registry=true in "addons-091599"
	I0520 10:26:11.266187 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.266767 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.275218 1469715 addons.go:69] Setting ingress=true in profile "addons-091599"
	I0520 10:26:11.275270 1469715 addons.go:234] Setting addon ingress=true in "addons-091599"
	I0520 10:26:11.275328 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.275899 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.276082 1469715 addons.go:69] Setting storage-provisioner=true in profile "addons-091599"
	I0520 10:26:11.276107 1469715 addons.go:234] Setting addon storage-provisioner=true in "addons-091599"
	I0520 10:26:11.276139 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.276567 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.305036 1469715 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-091599"
	I0520 10:26:11.305165 1469715 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-091599"
	I0520 10:26:11.305742 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.328375 1469715 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0520 10:26:11.327283 1469715 addons.go:69] Setting volumesnapshots=true in profile "addons-091599"
	I0520 10:26:11.338133 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.353638 1469715 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0520 10:26:11.353702 1469715 addons.go:234] Setting addon volumesnapshots=true in "addons-091599"
	I0520 10:26:11.359982 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.360662 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.361226 1469715 out.go:177]   - Using image docker.io/registry:2.8.3
	I0520 10:26:11.386671 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0520 10:26:11.373798 1469715 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0520 10:26:11.401066 1469715 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0520 10:26:11.455646 1469715 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0520 10:26:11.405928 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.458996 1469715 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0520 10:26:11.464466 1469715 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0520 10:26:11.466767 1469715 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0520 10:26:11.469424 1469715 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0520 10:26:11.472908 1469715 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0520 10:26:11.467010 1469715 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0520 10:26:11.467016 1469715 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0520 10:26:11.467954 1469715 addons.go:234] Setting addon default-storageclass=true in "addons-091599"
	I0520 10:26:11.475290 1469715 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-091599"
	I0520 10:26:11.480890 1469715 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0520 10:26:11.481071 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.483082 1469715 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0520 10:26:11.483090 1469715 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.0
	I0520 10:26:11.483096 1469715 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0520 10:26:11.483184 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.483197 1469715 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0520 10:26:11.485159 1469715 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 10:26:11.485270 1469715 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0520 10:26:11.489248 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.497923 1469715 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0520 10:26:11.492766 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.492797 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.492808 1469715 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 10:26:11.492814 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0520 10:26:11.499988 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.522179 1469715 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 10:26:11.511051 1469715 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 10:26:11.511174 1469715 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0520 10:26:11.511182 1469715 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0520 10:26:11.511186 1469715 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0520 10:26:11.515471 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.530461 1469715 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0520 10:26:11.532274 1469715 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0520 10:26:11.532299 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0520 10:26:11.532378 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.555861 1469715 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 10:26:11.529080 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0520 10:26:11.529089 1469715 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0520 10:26:11.529094 1469715 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0520 10:26:11.558182 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.563754 1469715 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 10:26:11.563775 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 10:26:11.563841 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.581220 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.590146 1469715 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0520 10:26:11.590169 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0520 10:26:11.590237 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.611056 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.625791 1469715 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0520 10:26:11.632538 1469715 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0520 10:26:11.632578 1469715 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0520 10:26:11.632676 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.644066 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:11.719137 1469715 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 10:26:11.719160 1469715 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 10:26:11.719246 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.741418 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:11.756201 1469715 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0520 10:26:11.759168 1469715 out.go:177]   - Using image docker.io/busybox:stable
	I0520 10:26:11.761517 1469715 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0520 10:26:11.761539 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0520 10:26:11.761612 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.763570 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:11.765173 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:11.768164 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:11.778293 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:11.796773 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:11.798544 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 10:26:11.798688 1469715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 10:26:11.814090 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:11.820462 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:11.841928 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:11.853823 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:11.862780 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:11.875702 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
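Each "docker container inspect -f ..." line above resolves the host port Docker published for the node container's 22/tcp, which is where the Port:40497 in these ssh client entries comes from. A small Go sketch of the same lookup, using the exact Go template and container name from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same template the log shows: index into the published-port
    	// map for 22/tcp and take the first binding's HostPort.
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect",
    		"-f", tmpl, "addons-091599").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ssh port:", strings.TrimSpace(string(out))) // e.g. 40497
    }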
	I0520 10:26:12.102263 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0520 10:26:12.189319 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0520 10:26:12.235970 1469715 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 10:26:12.236006 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0520 10:26:12.244210 1469715 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0520 10:26:12.244232 1469715 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0520 10:26:12.249611 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0520 10:26:12.299325 1469715 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0520 10:26:12.299411 1469715 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0520 10:26:12.325242 1469715 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0520 10:26:12.325347 1469715 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0520 10:26:12.338195 1469715 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0520 10:26:12.338299 1469715 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0520 10:26:12.338709 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 10:26:12.350466 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0520 10:26:12.362260 1469715 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0520 10:26:12.362344 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0520 10:26:12.450976 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0520 10:26:12.464319 1469715 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 10:26:12.464348 1469715 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 10:26:12.476032 1469715 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0520 10:26:12.476057 1469715 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0520 10:26:12.484462 1469715 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0520 10:26:12.484500 1469715 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0520 10:26:12.505221 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0520 10:26:12.556846 1469715 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0520 10:26:12.556873 1469715 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0520 10:26:12.565968 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 10:26:12.585576 1469715 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0520 10:26:12.585601 1469715 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0520 10:26:12.644557 1469715 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0520 10:26:12.644582 1469715 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0520 10:26:12.677454 1469715 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 10:26:12.677479 1469715 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 10:26:12.701831 1469715 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0520 10:26:12.701856 1469715 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0520 10:26:12.754277 1469715 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0520 10:26:12.754302 1469715 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0520 10:26:12.769867 1469715 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0520 10:26:12.769895 1469715 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0520 10:26:12.813419 1469715 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0520 10:26:12.813444 1469715 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0520 10:26:12.871103 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 10:26:12.915806 1469715 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0520 10:26:12.915829 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0520 10:26:12.951391 1469715 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0520 10:26:12.951415 1469715 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0520 10:26:12.961125 1469715 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0520 10:26:12.961150 1469715 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0520 10:26:12.991189 1469715 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 10:26:12.991214 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0520 10:26:13.077429 1469715 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0520 10:26:13.077455 1469715 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0520 10:26:13.087100 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0520 10:26:13.099936 1469715 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0520 10:26:13.099962 1469715 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0520 10:26:13.132330 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 10:26:13.148815 1469715 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0520 10:26:13.148844 1469715 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0520 10:26:13.171125 1469715 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0520 10:26:13.171152 1469715 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0520 10:26:13.260548 1469715 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0520 10:26:13.260572 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0520 10:26:13.263041 1469715 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0520 10:26:13.263066 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0520 10:26:13.356130 1469715 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0520 10:26:13.356164 1469715 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0520 10:26:13.396778 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0520 10:26:13.439080 1469715 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0520 10:26:13.439106 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0520 10:26:13.634073 1469715 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0520 10:26:13.634142 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0520 10:26:13.804493 1469715 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0520 10:26:13.804584 1469715 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0520 10:26:13.907402 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0520 10:26:14.992240 1469715 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.193658049s)
	I0520 10:26:14.992267 1469715 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
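The sed pipeline above patches CoreDNS's Corefile in place: it inserts a log directive before the errors plugin and a hosts stanza before the forward plugin, so host.minikube.internal resolves to the host gateway (192.168.49.1) from inside the cluster. Reconstructed from the two sed expressions, the patched ConfigMap fragment looks roughly like this; the other stock kubeadm plugins are elided and assumed:

    .:53 {
        log
        errors
        # ... other stock plugins (health, ready, kubernetes, cache) ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
    }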
	I0520 10:26:14.993386 1469715 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.194679507s)
	I0520 10:26:14.994567 1469715 node_ready.go:35] waiting up to 6m0s for node "addons-091599" to be "Ready" ...
	I0520 10:26:15.811252 1469715 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-091599" context rescaled to 1 replicas
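The rescale above trims CoreDNS from kubeadm's default of two replicas down to one, which is enough for a single-node cluster. A plain kubectl equivalent, wrapped in os/exec purely for illustration; minikube does this through its own kapi helper rather than the CLI:

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Scale the kube-system coredns deployment to a single replica.
    	out, err := exec.Command("kubectl", "-n", "kube-system",
    		"scale", "deployment", "coredns", "--replicas=1").CombinedOutput()
    	if err != nil {
    		log.Fatalf("scale failed: %v\n%s", err, out)
    	}
    	log.Printf("%s", out)
    }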
	I0520 10:26:16.316440 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.214087025s)
	I0520 10:26:16.826679 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.637276999s)
	I0520 10:26:16.826863 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.577137353s)
	I0520 10:26:16.826920 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.488167038s)
	I0520 10:26:16.826974 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.47644844s)
	I0520 10:26:17.029739 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:17.844451 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.393435631s)
	I0520 10:26:17.844496 1469715 addons.go:470] Verifying addon ingress=true in "addons-091599"
	I0520 10:26:17.847278 1469715 out.go:177] * Verifying ingress addon...
	I0520 10:26:17.844650 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.339400652s)
	I0520 10:26:17.844668 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.278677511s)
	I0520 10:26:17.844809 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.973678705s)
	I0520 10:26:17.844839 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.757713441s)
	I0520 10:26:17.844937 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.712578193s)
	I0520 10:26:17.845021 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.448206871s)
	I0520 10:26:17.851142 1469715 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0520 10:26:17.851370 1469715 addons.go:470] Verifying addon registry=true in "addons-091599"
	I0520 10:26:17.854522 1469715 out.go:177] * Verifying registry addon...
	I0520 10:26:17.851706 1469715 addons.go:470] Verifying addon metrics-server=true in "addons-091599"
	W0520 10:26:17.851731 1469715 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0520 10:26:17.857192 1469715 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-091599 service yakd-dashboard -n yakd-dashboard
	
	I0520 10:26:17.857244 1469715 retry.go:31] will retry after 161.96594ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0520 10:26:17.858186 1469715 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0520 10:26:17.878996 1469715 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0520 10:26:17.879024 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:17.891388 1469715 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0520 10:26:17.891414 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:18.032071 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
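The retry above is the standard response to kubectl's "ensure CRDs are installed first" error: the first apply submitted a VolumeSnapshotClass in the same batch as the CRD that defines it, and the API server had not established the CRD yet. minikube simply re-applies after a short backoff (here with --force, i.e. replace semantics). The sketch below shows an alternative that avoids the race by splitting the work into two phases: apply the CRDs, wait for their Established condition, then apply the dependent resources. The two-phase split is this sketch's own choice, not what minikube does; the file paths are the ones from the log:

    package main

    import (
    	"log"
    	"os/exec"
    )

    func kubectl(args ...string) {
    	out, err := exec.Command("kubectl", args...).CombinedOutput()
    	if err != nil {
    		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
    	}
    }

    func main() {
    	// Phase 1: CRDs only.
    	kubectl("apply",
    		"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
    		"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
    		"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml")

    	// Block until the CRD is served; the first apply raced against this.
    	kubectl("wait", "--for=condition=established", "--timeout=60s",
    		"crd/volumesnapshotclasses.snapshot.storage.k8s.io")

    	// Phase 2: resources that instantiate the CRDs.
    	kubectl("apply",
    		"-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
    		"-f", "/etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml",
    		"-f", "/etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml")
    }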
	I0520 10:26:18.386920 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:18.387650 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:18.586958 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.679438936s)
	I0520 10:26:18.586994 1469715 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-091599"
	I0520 10:26:18.590148 1469715 out.go:177] * Verifying csi-hostpath-driver addon...
	I0520 10:26:18.595176 1469715 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0520 10:26:18.674366 1469715 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0520 10:26:18.674390 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:18.855261 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:18.874851 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:19.103647 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:19.355453 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:19.376514 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:19.498843 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:19.616306 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:19.855443 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:19.873860 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:20.100930 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:20.355580 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:20.374334 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:20.607615 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:20.856298 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:20.873366 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:21.126424 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:21.157832 1469715 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0520 10:26:21.157977 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:21.209990 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:21.356472 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:21.406989 1469715 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0520 10:26:21.416612 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:21.451857 1469715 addons.go:234] Setting addon gcp-auth=true in "addons-091599"
	I0520 10:26:21.451911 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:21.452473 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:21.479464 1469715 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0520 10:26:21.479533 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:21.501417 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:21.506512 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.474391891s)
	I0520 10:26:21.517635 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:21.610806 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:21.627914 1469715 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 10:26:21.630590 1469715 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0520 10:26:21.633181 1469715 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0520 10:26:21.633250 1469715 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0520 10:26:21.660418 1469715 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0520 10:26:21.660492 1469715 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0520 10:26:21.683116 1469715 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0520 10:26:21.683187 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0520 10:26:21.704479 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0520 10:26:21.857447 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:21.873479 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:22.099646 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:22.328248 1469715 addons.go:470] Verifying addon gcp-auth=true in "addons-091599"
	I0520 10:26:22.331239 1469715 out.go:177] * Verifying gcp-auth addon...
	I0520 10:26:22.334738 1469715 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0520 10:26:22.338442 1469715 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0520 10:26:22.338466 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
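The repeated kapi.go:96 lines from here on are per-addon wait loops: list the pods matching the addon's label selector, then poll until a pod's phase leaves Pending. A client-go sketch of the same loop for the gcp-auth selector above; the kubeconfig path is an assumption:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		// Same namespace and label selector as the log's gcp-auth wait.
    		pods, err := cs.CoreV1().Pods("gcp-auth").List(context.TODO(),
    			metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=gcp-auth"})
    		if err == nil && len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
    			fmt.Println("gcp-auth pod is Running")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // kapi polls on a similar short interval
    	}
    }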
	I0520 10:26:22.355817 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:22.373806 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:22.602050 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:22.838697 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:22.855898 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:22.872880 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:23.100107 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:23.341221 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:23.357948 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:23.376101 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:23.604150 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:23.845959 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:23.856317 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:23.873688 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:23.998676 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:24.100123 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:24.338300 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:24.355945 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:24.372785 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:24.600731 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:24.839539 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:24.855622 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:24.873673 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:25.100181 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:25.338075 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:25.355896 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:25.373845 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:25.601226 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:25.838297 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:25.855211 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:25.873816 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:26.099547 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:26.338637 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:26.355648 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:26.373927 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:26.497763 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:26.603576 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:26.838634 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:26.855380 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:26.873263 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:27.099940 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:27.339238 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:27.356238 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:27.373083 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:27.599960 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:27.839001 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:27.855786 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:27.873754 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:28.099584 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:28.338073 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:28.356127 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:28.373013 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:28.498729 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:28.600654 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:28.839370 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:28.855560 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:28.873406 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:29.100212 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:29.340040 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:29.356423 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:29.373254 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:29.604705 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:29.838019 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:29.856060 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:29.872840 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:30.099896 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:30.338837 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:30.355742 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:30.373695 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:30.498794 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:30.600075 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:30.839246 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:30.855502 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:30.877105 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:31.100641 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:31.338644 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:31.355314 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:31.373302 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:31.613764 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:31.839344 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:31.855662 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:31.873394 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:32.100055 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:32.338727 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:32.355539 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:32.373488 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:32.599576 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:32.839658 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:32.855598 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:32.873421 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:32.998656 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:33.100111 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:33.338250 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:33.355666 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:33.373338 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:33.600362 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:33.838396 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:33.855941 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:33.873428 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:34.099895 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:34.338227 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:34.356048 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:34.373575 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:34.604088 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:34.839288 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:34.855142 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:34.872973 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:35.099889 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:35.338786 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:35.356221 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:35.373088 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:35.497908 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:35.599975 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:35.838565 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:35.854990 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:35.872952 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:36.099095 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:36.338721 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:36.355742 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:36.373803 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:36.608169 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:36.838847 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:36.855380 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:36.873223 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:37.100073 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:37.338432 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:37.355434 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:37.373386 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:37.499272 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:37.599976 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:37.839639 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:37.855906 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:37.873609 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:38.099912 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:38.338377 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:38.356022 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:38.372724 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:38.601467 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:38.839163 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:38.854929 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:38.872896 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:39.099974 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:39.339557 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:39.355100 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:39.372896 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:39.604734 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:39.838606 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:39.855064 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:39.874195 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:39.998189 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:40.100705 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:40.338678 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:40.355435 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:40.373290 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:40.601248 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:40.839098 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:40.855979 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:40.872905 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:41.100074 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:41.339783 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:41.356368 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:41.373386 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:41.604612 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:41.841066 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:41.855605 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:41.873675 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:42.105109 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:42.338973 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:42.356213 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:42.373509 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:42.498810 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:42.600416 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:42.839949 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:42.855868 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:42.873784 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:43.099291 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:43.338748 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:43.355665 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:43.373676 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:43.599691 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:43.839243 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:43.855573 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:43.873419 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:44.099590 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:44.338071 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:44.355829 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:44.373828 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:44.601955 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:44.839035 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:44.856189 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:44.872885 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:44.997599 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:45.100227 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:45.338675 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:45.354960 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:45.372765 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:45.604670 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:45.839597 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:45.854989 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:45.874793 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:46.099866 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:46.338527 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:46.355444 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:46.373247 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:46.521067 1469715 node_ready.go:49] node "addons-091599" has status "Ready":"True"
	I0520 10:26:46.521143 1469715 node_ready.go:38] duration metric: took 31.526499444s for node "addons-091599" to be "Ready" ...
	I0520 10:26:46.521169 1469715 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
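The "extra waiting" step above is a polling loop: minikube repeatedly lists the pods matching each system-critical label and blocks until every match reports Ready. A minimal sketch of that pattern with client-go follows; it is not minikube's actual pod_ready.go/kapi.go code, and the kubeconfig path, namespace, and 6m budget are assumptions mirroring this log:

	// Sketch only: poll pods matching a label selector until all are Ready,
	// the pattern behind the pod_ready.go/kapi.go lines in this log.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// allReady reports whether every pod in the list has a Ready=True condition.
	func allReady(pods []corev1.Pod) bool {
		if len(pods) == 0 {
			return false
		}
		for _, p := range pods {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false
			}
		}
		return true
	}

	func main() {
		// Assumes the default kubeconfig at ~/.kube/config.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		selector := "k8s-app=kube-dns" // one of the labels listed above
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && allReady(pods.Items) {
				fmt.Println("all pods matching", selector, "are Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for", selector)
	}

The 500ms sleep in the sketch mirrors the roughly twice-per-second cadence visible in the timestamps of the kapi.go:96 lines above.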
	I0520 10:26:46.547669 1469715 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b9xf7" in "kube-system" namespace to be "Ready" ...
	I0520 10:26:46.606094 1469715 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0520 10:26:46.606163 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:46.995668 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:46.996342 1469715 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0520 10:26:46.996373 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:46.996408 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:47.136259 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:47.379270 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:47.380101 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:47.384260 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:47.600994 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:47.840979 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:47.856363 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:47.873940 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:48.101881 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:48.338397 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:48.357168 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:48.377443 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:48.555657 1469715 pod_ready.go:102] pod "coredns-7db6d8ff4d-b9xf7" in "kube-system" namespace has status "Ready":"False"
	I0520 10:26:48.611229 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:48.839389 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:48.859852 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:48.877457 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:49.104865 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:49.338914 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:49.355756 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:49.374554 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:49.553763 1469715 pod_ready.go:92] pod "coredns-7db6d8ff4d-b9xf7" in "kube-system" namespace has status "Ready":"True"
	I0520 10:26:49.553829 1469715 pod_ready.go:81] duration metric: took 3.006087336s for pod "coredns-7db6d8ff4d-b9xf7" in "kube-system" namespace to be "Ready" ...
	I0520 10:26:49.553858 1469715 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-091599" in "kube-system" namespace to be "Ready" ...
	I0520 10:26:49.558976 1469715 pod_ready.go:92] pod "etcd-addons-091599" in "kube-system" namespace has status "Ready":"True"
	I0520 10:26:49.559002 1469715 pod_ready.go:81] duration metric: took 5.136115ms for pod "etcd-addons-091599" in "kube-system" namespace to be "Ready" ...
	I0520 10:26:49.559017 1469715 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-091599" in "kube-system" namespace to be "Ready" ...
	I0520 10:26:49.564282 1469715 pod_ready.go:92] pod "kube-apiserver-addons-091599" in "kube-system" namespace has status "Ready":"True"
	I0520 10:26:49.564348 1469715 pod_ready.go:81] duration metric: took 5.322474ms for pod "kube-apiserver-addons-091599" in "kube-system" namespace to be "Ready" ...
	I0520 10:26:49.564367 1469715 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-091599" in "kube-system" namespace to be "Ready" ...
	I0520 10:26:49.570345 1469715 pod_ready.go:92] pod "kube-controller-manager-addons-091599" in "kube-system" namespace has status "Ready":"True"
	I0520 10:26:49.570368 1469715 pod_ready.go:81] duration metric: took 5.992358ms for pod "kube-controller-manager-addons-091599" in "kube-system" namespace to be "Ready" ...
	I0520 10:26:49.570382 1469715 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mxn9s" in "kube-system" namespace to be "Ready" ...
	I0520 10:26:49.575686 1469715 pod_ready.go:92] pod "kube-proxy-mxn9s" in "kube-system" namespace has status "Ready":"True"
	I0520 10:26:49.575713 1469715 pod_ready.go:81] duration metric: took 5.305489ms for pod "kube-proxy-mxn9s" in "kube-system" namespace to be "Ready" ...
	I0520 10:26:49.575725 1469715 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-091599" in "kube-system" namespace to be "Ready" ...
	I0520 10:26:49.602013 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:49.838692 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:49.855743 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:49.874125 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:49.951268 1469715 pod_ready.go:92] pod "kube-scheduler-addons-091599" in "kube-system" namespace has status "Ready":"True"
	I0520 10:26:49.951298 1469715 pod_ready.go:81] duration metric: took 375.564925ms for pod "kube-scheduler-addons-091599" in "kube-system" namespace to be "Ready" ...
	I0520 10:26:49.951311 1469715 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace to be "Ready" ...
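From here the loop polls "metrics-server-c59844bb4-2952v", which keeps reporting "Ready":"False" in the lines below. When a pod sticks at Ready=False like this, its condition list says which gate is failing. A minimal sketch of reading those conditions with client-go, with the pod name copied from this log and the kubeconfig path assumed:

	// Sketch only: print the conditions of the metrics-server pod named in
	// this log to see why its Ready condition stays False.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes the default kubeconfig at ~/.kube/config.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-c59844bb4-2952v", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range pod.Status.Conditions {
			// PodScheduled / Initialized / ContainersReady / Ready, each
			// carrying a Reason and Message when the status is False.
			fmt.Printf("%s=%s reason=%q message=%q\n", c.Type, c.Status, c.Reason, c.Message)
		}
	}

The same conditions can be read without code via kubectl describe pod -n kube-system metrics-server-c59844bb4-2952v.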
	I0520 10:26:50.106016 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:50.340229 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:50.356972 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:50.374234 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:50.621946 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:50.841744 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:50.856822 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:50.874380 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:51.101593 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:51.340157 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:51.357199 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:51.373573 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:51.606279 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:51.840064 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:51.856671 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:51.875093 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:51.959117 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:26:52.102176 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:52.339200 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:52.357035 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:52.373577 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:52.602913 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:52.839386 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:52.856303 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:52.874211 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:53.102008 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:53.339620 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:53.356099 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:53.373929 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:53.611329 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:53.838803 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:53.856834 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:53.874351 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:54.102543 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:54.339430 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:54.357376 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:54.381949 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:54.463692 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:26:54.602318 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:54.839577 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:54.856282 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:54.873555 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:55.100825 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:55.338532 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:55.360371 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:55.373585 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:55.611863 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:55.838460 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:55.857676 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:55.874829 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:56.103894 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:56.338891 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:56.362971 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:56.373990 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:56.610425 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:56.841405 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:56.855601 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:56.874831 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:56.958552 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:26:57.102216 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:57.338507 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:57.356414 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:57.373731 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:57.607339 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:57.839367 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:57.855648 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:57.874113 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:58.100419 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:58.338771 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:58.356168 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:58.373420 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:58.601479 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:58.838177 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:58.855267 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:58.873721 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:59.101815 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:59.346870 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:59.361635 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:59.374857 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:59.459810 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:26:59.609732 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:59.839480 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:59.856242 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:59.874458 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:00.151410 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:00.367885 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:00.368860 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:00.384973 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:00.610620 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:00.838700 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:00.856609 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:00.874360 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:01.102262 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:01.339185 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:01.356140 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:01.374251 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:01.618792 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:01.839011 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:01.856033 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:01.873985 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:01.958497 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:02.118710 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:02.338063 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:02.355506 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:02.373839 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:02.613690 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:02.839451 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:02.856455 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:02.875486 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:03.101800 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:03.338757 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:03.356554 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:03.374628 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:03.613265 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:03.838710 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:03.855774 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:03.874485 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:04.102832 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:04.338272 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:04.355852 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:04.374693 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:04.459281 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:04.605493 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:04.839506 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:04.856266 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:04.874304 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:05.103025 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:05.339358 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:05.356746 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:05.374868 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:05.632006 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:05.841528 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:05.857263 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:05.882459 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:06.102319 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:06.342001 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:06.357538 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:06.376299 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:06.604522 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:06.842080 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:06.855927 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:06.881545 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:06.961936 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:07.102409 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:07.343704 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:07.367603 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:07.378466 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:07.610573 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:07.838609 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:07.855441 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:07.874290 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:08.101047 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:08.338418 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:08.355303 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:08.374187 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:08.612673 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:08.838410 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:08.855710 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:08.874719 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:09.101204 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:09.338662 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:09.355255 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:09.373761 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:09.479970 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:09.610488 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:09.838916 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:09.856079 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:09.876119 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:10.101967 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:10.338720 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:10.356136 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:10.375353 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:10.612824 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:10.840778 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:10.857078 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:10.882429 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:11.104104 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:11.339699 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:11.365226 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:11.376224 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:11.620652 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:11.838923 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:11.858172 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:11.874989 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:11.958994 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:12.104433 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:12.339469 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:12.356360 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:12.375382 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:12.603859 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:12.840014 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:12.857622 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:12.877536 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:13.105355 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:13.339010 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:13.356902 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:13.378363 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:13.601855 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:13.839380 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:13.856331 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:13.874078 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:13.959386 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:14.101087 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:14.339277 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:14.355415 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:14.373840 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:14.607888 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:14.838690 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:14.855444 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:14.873777 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:15.106328 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:15.339108 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:15.356466 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:15.374620 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:15.609572 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:15.838633 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:15.856242 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:15.873917 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:16.101282 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:16.339818 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:16.361398 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:16.381419 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:16.458445 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:16.613095 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:16.840391 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:16.857737 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:16.875685 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:17.101423 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:17.340216 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:17.362996 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:17.388164 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:17.608798 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:17.841025 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:17.857413 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:17.878160 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:18.103468 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:18.339247 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:18.372334 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:18.427096 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:18.466712 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:18.629544 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:18.840767 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:18.858702 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:18.876652 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:19.104221 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:19.340019 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:19.361339 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:19.382164 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:19.627274 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:19.841034 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:19.856805 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:19.876630 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:20.102871 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:20.338439 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:20.360266 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:20.377904 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:20.607894 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:20.840624 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:20.857532 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:20.873764 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:20.958939 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:21.115337 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:21.338792 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:21.356576 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:21.373908 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:21.619218 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:21.839931 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:21.856857 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:21.873875 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:22.101373 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:22.340377 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:22.355873 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:22.374378 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:22.604802 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:22.838951 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:22.856671 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:22.885623 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:23.103529 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:23.339273 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:23.356085 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:23.374011 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:23.458691 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:23.601624 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:23.838622 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:23.855299 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:23.874022 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:24.100797 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:24.339726 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:24.355962 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:24.374740 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:24.609218 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:24.839601 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:24.859199 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:24.874628 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:25.101998 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:25.339271 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:25.355867 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:25.373966 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:25.602524 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:25.839868 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:25.857612 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:25.874667 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:25.971539 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:26.102084 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:26.339739 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:26.365608 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:26.375705 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:26.612028 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:26.841436 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:26.857587 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:26.884517 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:27.123741 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:27.338826 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:27.357550 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:27.374656 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:27.610402 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:27.838669 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:27.856894 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:27.874828 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:28.102533 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:28.339188 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:28.356418 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:28.373903 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:28.458117 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:28.608182 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:28.839537 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:28.856240 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:28.873556 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:29.102201 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:29.338599 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:29.355812 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:29.374403 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:29.600899 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:29.839397 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:29.856031 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:29.874423 1469715 kapi.go:107] duration metric: took 1m12.016233008s to wait for kubernetes.io/minikube-addons=registry ...
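
The kapi.go:96/kapi.go:107 pairs above are minikube's label-selector polling loop: it repeatedly lists pods matching a label and logs "Pending" until every match is scheduled and running, then records the total wait as a duration metric. The sketch below is a minimal reconstruction using client-go, assuming a kubeconfig at the default path; the function name waitForLabeledPods and the 500ms poll interval are illustrative choices, not minikube's actual kapi.go implementation.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabeledPods polls pods matching selector until at least one
    // exists and none is Pending, or the timeout elapses. Sketch only;
    // minikube's real loop differs in details.
    func waitForLabeledPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            pending := 0
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodPending {
                    pending++
                }
            }
            if len(pods.Items) > 0 && pending == 0 {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %q", selector)
            }
            fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitForLabeledPods(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
            panic(err)
        }
    }
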
	I0520 10:27:30.101068 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:30.338451 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:30.356901 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:30.460664 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:30.614532 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:30.842934 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:30.857435 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:31.100759 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:31.339291 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:31.355704 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:31.602544 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:31.838001 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:31.857152 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:32.102113 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:32.339174 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:32.357899 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:32.606591 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:32.839752 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:32.856033 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:32.965982 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:33.101876 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:33.344675 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:33.356481 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:33.634168 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:33.845589 1469715 kapi.go:107] duration metric: took 1m11.510844281s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0520 10:27:33.855180 1469715 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-091599 cluster.
	I0520 10:27:33.864892 1469715 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0520 10:27:33.873611 1469715 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
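
Per the gcp-auth output above, a pod opts out of credential mounting by carrying a label with the gcp-auth-skip-secret key. A minimal sketch of such a pod, built with the Kubernetes API types and printed as YAML; the pod name, image, and the label value "true" are arbitrary illustrations (the message above only requires the key to be present).

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        pod := corev1.Pod{
            TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{
                Name: "no-gcp-creds",
                // The gcp-auth webhook skips pods labeled with this key;
                // "true" is an arbitrary value chosen for readability.
                Labels: map[string]string{"gcp-auth-skip-secret": "true"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
            },
        }
        out, err := yaml.Marshal(pod)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }
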
	I0520 10:27:33.881926 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:34.108621 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:34.356779 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:34.612343 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:34.856917 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:35.127005 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:35.367619 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:35.459867 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:35.601540 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:35.856781 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:36.102258 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:36.355941 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:36.611114 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:36.878009 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:37.107420 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:37.356399 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:37.605011 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:37.859179 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:37.957228 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:38.107203 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:38.355856 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:38.600811 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:38.856121 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:39.101960 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:39.357068 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:39.606054 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:39.861805 1469715 kapi.go:107] duration metric: took 1m22.010661339s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0520 10:27:39.958064 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:40.105873 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:40.606303 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:41.101089 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:41.605969 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:42.101979 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:42.457835 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:42.604314 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:43.100480 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:43.602227 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:44.100844 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:44.459743 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:44.603761 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:45.102707 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:45.608930 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:46.101701 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:46.602617 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:46.957842 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:47.101866 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:47.605766 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:48.103993 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:48.600910 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:48.958039 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:49.101611 1469715 kapi.go:107] duration metric: took 1m30.506430912s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0520 10:27:49.103529 1469715 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, storage-provisioner, cloud-spanner, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0520 10:27:49.105119 1469715 addons.go:505] duration metric: took 1m37.873709639s for enable addons: enabled=[ingress-dns nvidia-device-plugin storage-provisioner cloud-spanner storage-provisioner-rancher inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
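
The addons.go:505 line closes the enable phase. For pods created before gcp-auth finished, the output above suggests rerunning addons enable with --refresh; a hypothetical wrapper for that rerun is sketched below, assuming a minikube binary on PATH. The exact invocation shape is inferred from the log message, and the profile name matches this test run, so adjust both for your own cluster.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Re-run the gcp-auth addon with --refresh so existing pods get
        // credentials mounted, per the minikube output above.
        cmd := exec.Command("minikube", "-p", "addons-091599", "addons", "enable", "gcp-auth", "--refresh")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "addons enable failed:", err)
            os.Exit(1)
        }
    }
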
	I0520 10:27:51.458631 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:53.957414 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:55.957954 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:57.960888 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:00.464369 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:02.957313 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:04.958859 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:07.457951 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:09.957982 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:12.456914 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:14.458286 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:16.957076 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:18.957832 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:20.958011 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:22.959167 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:25.457915 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:27.457828 1469715 pod_ready.go:92] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"True"
	I0520 10:28:27.457856 1469715 pod_ready.go:81] duration metric: took 1m37.50653712s for pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace to be "Ready" ...
	I0520 10:28:27.457869 1469715 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-xt86b" in "kube-system" namespace to be "Ready" ...
	I0520 10:28:27.462993 1469715 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-xt86b" in "kube-system" namespace has status "Ready":"True"
	I0520 10:28:27.463018 1469715 pod_ready.go:81] duration metric: took 5.141317ms for pod "nvidia-device-plugin-daemonset-xt86b" in "kube-system" namespace to be "Ready" ...
	I0520 10:28:27.463038 1469715 pod_ready.go:38] duration metric: took 1m40.94182784s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
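
The pod_ready.go:102 and pod_ready.go:92 lines above report the pod's Ready condition, which is the standard readiness signal in PodStatus. A minimal sketch of that check with client-go, assuming a kubeconfig at the default path; the helper name isPodReady is illustrative, not minikube's actual code.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports the pod's Ready condition, the same signal the
    // pod_ready.go:102 lines above render as "Ready":"False".
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-c59844bb4-2952v", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
    }
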
	I0520 10:28:27.463052 1469715 api_server.go:52] waiting for apiserver process to appear ...
	I0520 10:28:27.463085 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 10:28:27.463149 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 10:28:27.514875 1469715 cri.go:89] found id: "733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b"
	I0520 10:28:27.514900 1469715 cri.go:89] found id: ""
	I0520 10:28:27.514908 1469715 logs.go:276] 1 containers: [733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b]
	I0520 10:28:27.514966 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:27.518734 1469715 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 10:28:27.518810 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 10:28:27.562055 1469715 cri.go:89] found id: "afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3"
	I0520 10:28:27.562079 1469715 cri.go:89] found id: ""
	I0520 10:28:27.562088 1469715 logs.go:276] 1 containers: [afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3]
	I0520 10:28:27.562152 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:27.565394 1469715 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 10:28:27.565474 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 10:28:27.606175 1469715 cri.go:89] found id: "a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b"
	I0520 10:28:27.606198 1469715 cri.go:89] found id: ""
	I0520 10:28:27.606207 1469715 logs.go:276] 1 containers: [a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b]
	I0520 10:28:27.606263 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:27.609904 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 10:28:27.609983 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 10:28:27.653357 1469715 cri.go:89] found id: "cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea"
	I0520 10:28:27.653382 1469715 cri.go:89] found id: ""
	I0520 10:28:27.653390 1469715 logs.go:276] 1 containers: [cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea]
	I0520 10:28:27.653454 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:27.657033 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 10:28:27.657107 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 10:28:27.694857 1469715 cri.go:89] found id: "8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65"
	I0520 10:28:27.694881 1469715 cri.go:89] found id: ""
	I0520 10:28:27.694889 1469715 logs.go:276] 1 containers: [8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65]
	I0520 10:28:27.694988 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:27.698305 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 10:28:27.698385 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 10:28:27.739425 1469715 cri.go:89] found id: "417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77"
	I0520 10:28:27.739446 1469715 cri.go:89] found id: ""
	I0520 10:28:27.739454 1469715 logs.go:276] 1 containers: [417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77]
	I0520 10:28:27.739512 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:27.742999 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 10:28:27.743076 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 10:28:27.785811 1469715 cri.go:89] found id: "eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da"
	I0520 10:28:27.785838 1469715 cri.go:89] found id: ""
	I0520 10:28:27.785846 1469715 logs.go:276] 1 containers: [eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da]
	I0520 10:28:27.785905 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:27.789365 1469715 logs.go:123] Gathering logs for kube-controller-manager [417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77] ...
	I0520 10:28:27.789391 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77"
	I0520 10:28:27.861032 1469715 logs.go:123] Gathering logs for kindnet [eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da] ...
	I0520 10:28:27.861070 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da"
	I0520 10:28:27.898675 1469715 logs.go:123] Gathering logs for CRI-O ...
	I0520 10:28:27.898704 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 10:28:27.990785 1469715 logs.go:123] Gathering logs for container status ...
	I0520 10:28:27.990823 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 10:28:28.041944 1469715 logs.go:123] Gathering logs for kube-scheduler [cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea] ...
	I0520 10:28:28.041976 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea"
	I0520 10:28:28.085430 1469715 logs.go:123] Gathering logs for kube-proxy [8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65] ...
	I0520 10:28:28.085467 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65"
	I0520 10:28:28.135190 1469715 logs.go:123] Gathering logs for kubelet ...
	I0520 10:28:28.135220 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0520 10:28:28.185758 1469715 logs.go:138] Found kubelet problem: May 20 10:26:46 addons-091599 kubelet[1492]: W0520 10:26:46.429641    1492 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-091599' and this object
	W0520 10:28:28.185971 1469715 logs.go:138] Found kubelet problem: May 20 10:26:46 addons-091599 kubelet[1492]: E0520 10:26:46.429704    1492 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-091599' and this object
	W0520 10:28:28.188120 1469715 logs.go:138] Found kubelet problem: May 20 10:26:46 addons-091599 kubelet[1492]: W0520 10:26:46.482578    1492 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-091599' and this object
	W0520 10:28:28.188331 1469715 logs.go:138] Found kubelet problem: May 20 10:26:46 addons-091599 kubelet[1492]: E0520 10:26:46.482633    1492 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-091599' and this object
	I0520 10:28:28.224727 1469715 logs.go:123] Gathering logs for dmesg ...
	I0520 10:28:28.224758 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 10:28:28.244015 1469715 logs.go:123] Gathering logs for describe nodes ...
	I0520 10:28:28.244050 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 10:28:28.409101 1469715 logs.go:123] Gathering logs for kube-apiserver [733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b] ...
	I0520 10:28:28.409147 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b"
	I0520 10:28:28.474928 1469715 logs.go:123] Gathering logs for etcd [afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3] ...
	I0520 10:28:28.474960 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3"
	I0520 10:28:28.531097 1469715 logs.go:123] Gathering logs for coredns [a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b] ...
	I0520 10:28:28.531130 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b"
	I0520 10:28:28.582043 1469715 out.go:304] Setting ErrFile to fd 2...
	I0520 10:28:28.582069 1469715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0520 10:28:28.582128 1469715 out.go:239] X Problems detected in kubelet:
	W0520 10:28:28.582137 1469715 out.go:239]   May 20 10:26:46 addons-091599 kubelet[1492]: W0520 10:26:46.429641    1492 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-091599' and this object
	W0520 10:28:28.582144 1469715 out.go:239]   May 20 10:26:46 addons-091599 kubelet[1492]: E0520 10:26:46.429704    1492 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-091599' and this object
	W0520 10:28:28.582155 1469715 out.go:239]   May 20 10:26:46 addons-091599 kubelet[1492]: W0520 10:26:46.482578    1492 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-091599' and this object
	W0520 10:28:28.582166 1469715 out.go:239]   May 20 10:26:46 addons-091599 kubelet[1492]: E0520 10:26:46.482633    1492 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-091599' and this object
	I0520 10:28:28.582173 1469715 out.go:304] Setting ErrFile to fd 2...
	I0520 10:28:28.582178 1469715 out.go:338] TERM=,COLORTERM=, which probably does not support color
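
Each "Gathering logs for X" step above shells out (via ssh_runner.go) to the exact commands shown: crictl logs --tail 400 for containers, journalctl for kubelet and CRI-O, and a crictl/docker ps fallback for container status. A local approximation is sketched below; running the commands directly instead of over SSH into the node is a simplification, and the gather helper is my own naming.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one collection command through bash, mirroring the
    // ssh_runner.go:195 Run lines in the log above.
    func gather(label, command string) {
        fmt.Printf("Gathering logs for %s ...\n", label)
        out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
        if err != nil {
            fmt.Printf("  %s failed: %v\n", label, err)
        }
        fmt.Print(string(out))
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("CRI-O", "sudo journalctl -u crio -n 400")
        gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }
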
	I0520 10:28:38.583454 1469715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:28:38.599051 1469715 api_server.go:72] duration metric: took 2m27.367883564s to wait for apiserver process to appear ...
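
The "apiserver process to appear" check is the pgrep run at 10:28:38.583: -x requires an exact full-command-line match (-f) and -n picks the newest match, so a non-zero exit means the process is not up yet. A small sketch of the same probe:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same probe as the ssh_runner line above; non-zero exit from
        // pgrep means no matching kube-apiserver process was found.
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("kube-apiserver process not found yet:", err)
            return
        }
        fmt.Println("kube-apiserver pid:", strings.TrimSpace(string(out)))
    }
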
	I0520 10:28:38.599078 1469715 api_server.go:88] waiting for apiserver healthz status ...
	I0520 10:28:38.599112 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 10:28:38.599176 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 10:28:38.649407 1469715 cri.go:89] found id: "733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b"
	I0520 10:28:38.649438 1469715 cri.go:89] found id: ""
	I0520 10:28:38.649445 1469715 logs.go:276] 1 containers: [733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b]
	I0520 10:28:38.649502 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:38.653172 1469715 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 10:28:38.653253 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 10:28:38.692978 1469715 cri.go:89] found id: "afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3"
	I0520 10:28:38.693000 1469715 cri.go:89] found id: ""
	I0520 10:28:38.693008 1469715 logs.go:276] 1 containers: [afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3]
	I0520 10:28:38.693072 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:38.696539 1469715 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 10:28:38.696609 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 10:28:38.736066 1469715 cri.go:89] found id: "a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b"
	I0520 10:28:38.736190 1469715 cri.go:89] found id: ""
	I0520 10:28:38.736206 1469715 logs.go:276] 1 containers: [a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b]
	I0520 10:28:38.736297 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:38.740367 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 10:28:38.740449 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 10:28:38.781005 1469715 cri.go:89] found id: "cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea"
	I0520 10:28:38.781073 1469715 cri.go:89] found id: ""
	I0520 10:28:38.781096 1469715 logs.go:276] 1 containers: [cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea]
	I0520 10:28:38.781168 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:38.784808 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 10:28:38.784882 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 10:28:38.822327 1469715 cri.go:89] found id: "8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65"
	I0520 10:28:38.822395 1469715 cri.go:89] found id: ""
	I0520 10:28:38.822411 1469715 logs.go:276] 1 containers: [8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65]
	I0520 10:28:38.822480 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:38.826032 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 10:28:38.826108 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 10:28:38.866457 1469715 cri.go:89] found id: "417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77"
	I0520 10:28:38.866480 1469715 cri.go:89] found id: ""
	I0520 10:28:38.866488 1469715 logs.go:276] 1 containers: [417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77]
	I0520 10:28:38.866544 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:38.870103 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 10:28:38.870179 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 10:28:38.908330 1469715 cri.go:89] found id: "eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da"
	I0520 10:28:38.908353 1469715 cri.go:89] found id: ""
	I0520 10:28:38.908361 1469715 logs.go:276] 1 containers: [eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da]
	I0520 10:28:38.908476 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:38.911923 1469715 logs.go:123] Gathering logs for etcd [afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3] ...
	I0520 10:28:38.911953 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3"
	I0520 10:28:38.984383 1469715 logs.go:123] Gathering logs for coredns [a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b] ...
	I0520 10:28:38.984421 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b"
	I0520 10:28:39.036586 1469715 logs.go:123] Gathering logs for kube-scheduler [cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea] ...
	I0520 10:28:39.036617 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea"
	I0520 10:28:39.087722 1469715 logs.go:123] Gathering logs for kube-controller-manager [417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77] ...
	I0520 10:28:39.087759 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77"
	I0520 10:28:39.161394 1469715 logs.go:123] Gathering logs for kindnet [eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da] ...
	I0520 10:28:39.161429 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da"
	I0520 10:28:39.202412 1469715 logs.go:123] Gathering logs for container status ...
	I0520 10:28:39.202442 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 10:28:39.251600 1469715 logs.go:123] Gathering logs for kubelet ...
	I0520 10:28:39.251631 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0520 10:28:39.293885 1469715 logs.go:138] Found kubelet problem: May 20 10:26:46 addons-091599 kubelet[1492]: W0520 10:26:46.429641    1492 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-091599' and this object
	W0520 10:28:39.294101 1469715 logs.go:138] Found kubelet problem: May 20 10:26:46 addons-091599 kubelet[1492]: E0520 10:26:46.429704    1492 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-091599' and this object
	W0520 10:28:39.296222 1469715 logs.go:138] Found kubelet problem: May 20 10:26:46 addons-091599 kubelet[1492]: W0520 10:26:46.482578    1492 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-091599' and this object
	W0520 10:28:39.296431 1469715 logs.go:138] Found kubelet problem: May 20 10:26:46 addons-091599 kubelet[1492]: E0520 10:26:46.482633    1492 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-091599' and this object
	I0520 10:28:39.334382 1469715 logs.go:123] Gathering logs for describe nodes ...
	I0520 10:28:39.334413 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 10:28:39.464764 1469715 logs.go:123] Gathering logs for kube-apiserver [733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b] ...
	I0520 10:28:39.464797 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b"
	I0520 10:28:39.541526 1469715 logs.go:123] Gathering logs for kube-proxy [8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65] ...
	I0520 10:28:39.541563 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65"
	I0520 10:28:39.582495 1469715 logs.go:123] Gathering logs for CRI-O ...
	I0520 10:28:39.582523 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 10:28:39.678846 1469715 logs.go:123] Gathering logs for dmesg ...
	I0520 10:28:39.678883 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 10:28:39.697928 1469715 out.go:304] Setting ErrFile to fd 2...
	I0520 10:28:39.697956 1469715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0520 10:28:39.698012 1469715 out.go:239] X Problems detected in kubelet:
	W0520 10:28:39.698025 1469715 out.go:239]   May 20 10:26:46 addons-091599 kubelet[1492]: W0520 10:26:46.429641    1492 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-091599' and this object
	W0520 10:28:39.698032 1469715 out.go:239]   May 20 10:26:46 addons-091599 kubelet[1492]: E0520 10:26:46.429704    1492 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-091599' and this object
	W0520 10:28:39.698044 1469715 out.go:239]   May 20 10:26:46 addons-091599 kubelet[1492]: W0520 10:26:46.482578    1492 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-091599' and this object
	W0520 10:28:39.698053 1469715 out.go:239]   May 20 10:26:46 addons-091599 kubelet[1492]: E0520 10:26:46.482633    1492 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-091599' and this object
	I0520 10:28:39.698060 1469715 out.go:304] Setting ErrFile to fd 2...
	I0520 10:28:39.698072 1469715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:28:49.699283 1469715 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0520 10:28:49.706842 1469715 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0520 10:28:49.707926 1469715 api_server.go:141] control plane version: v1.30.1
	I0520 10:28:49.707950 1469715 api_server.go:131] duration metric: took 11.108865148s to wait for apiserver health ...
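
The healthz probe above is a plain HTTPS GET against the apiserver; a healthy control plane answers 200 with the body "ok", exactly as logged. A minimal sketch follows; skipping TLS verification is a shortcut for illustration only, whereas minikube itself trusts the cluster CA from the kubeconfig.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Illustrative only: InsecureSkipVerify avoids needing the
        // cluster CA bundle in this sketch.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            fmt.Println("healthz check failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // A healthy apiserver returns 200 with the body "ok".
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }
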
	I0520 10:28:49.707960 1469715 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 10:28:49.707979 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 10:28:49.708046 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 10:28:49.747670 1469715 cri.go:89] found id: "733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b"
	I0520 10:28:49.747693 1469715 cri.go:89] found id: ""
	I0520 10:28:49.747701 1469715 logs.go:276] 1 containers: [733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b]
	I0520 10:28:49.747762 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:49.751183 1469715 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 10:28:49.751262 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 10:28:49.800402 1469715 cri.go:89] found id: "afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3"
	I0520 10:28:49.800421 1469715 cri.go:89] found id: ""
	I0520 10:28:49.800429 1469715 logs.go:276] 1 containers: [afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3]
	I0520 10:28:49.800491 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:49.804226 1469715 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 10:28:49.804295 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 10:28:49.849387 1469715 cri.go:89] found id: "a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b"
	I0520 10:28:49.849413 1469715 cri.go:89] found id: ""
	I0520 10:28:49.849421 1469715 logs.go:276] 1 containers: [a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b]
	I0520 10:28:49.849497 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:49.853716 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 10:28:49.853826 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 10:28:49.897908 1469715 cri.go:89] found id: "cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea"
	I0520 10:28:49.897932 1469715 cri.go:89] found id: ""
	I0520 10:28:49.897940 1469715 logs.go:276] 1 containers: [cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea]
	I0520 10:28:49.897996 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:49.901331 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 10:28:49.901463 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 10:28:49.939367 1469715 cri.go:89] found id: "8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65"
	I0520 10:28:49.939389 1469715 cri.go:89] found id: ""
	I0520 10:28:49.939397 1469715 logs.go:276] 1 containers: [8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65]
	I0520 10:28:49.939473 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:49.942930 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 10:28:49.943012 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 10:28:49.980709 1469715 cri.go:89] found id: "417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77"
	I0520 10:28:49.980734 1469715 cri.go:89] found id: ""
	I0520 10:28:49.980743 1469715 logs.go:276] 1 containers: [417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77]
	I0520 10:28:49.980801 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:49.985356 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 10:28:49.985429 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 10:28:50.033391 1469715 cri.go:89] found id: "eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da"
	I0520 10:28:50.033415 1469715 cri.go:89] found id: ""
	I0520 10:28:50.033425 1469715 logs.go:276] 1 containers: [eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da]
	I0520 10:28:50.033508 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:50.037803 1469715 logs.go:123] Gathering logs for describe nodes ...
	I0520 10:28:50.037851 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 10:28:50.178515 1469715 logs.go:123] Gathering logs for kube-apiserver [733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b] ...
	I0520 10:28:50.178599 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b"
	I0520 10:28:50.261689 1469715 logs.go:123] Gathering logs for etcd [afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3] ...
	I0520 10:28:50.261764 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3"
	I0520 10:28:50.329567 1469715 logs.go:123] Gathering logs for coredns [a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b] ...
	I0520 10:28:50.329602 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b"
	I0520 10:28:50.366482 1469715 logs.go:123] Gathering logs for kube-scheduler [cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea] ...
	I0520 10:28:50.366513 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea"
	I0520 10:28:50.413015 1469715 logs.go:123] Gathering logs for kindnet [eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da] ...
	I0520 10:28:50.413046 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da"
	I0520 10:28:50.456114 1469715 logs.go:123] Gathering logs for CRI-O ...
	I0520 10:28:50.456140 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 10:28:50.550222 1469715 logs.go:123] Gathering logs for kubelet ...
	I0520 10:28:50.550307 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0520 10:28:50.599723 1469715 logs.go:138] Found kubelet problem: May 20 10:26:46 addons-091599 kubelet[1492]: W0520 10:26:46.429641    1492 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-091599' and this object
	W0520 10:28:50.599934 1469715 logs.go:138] Found kubelet problem: May 20 10:26:46 addons-091599 kubelet[1492]: E0520 10:26:46.429704    1492 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-091599' and this object
	W0520 10:28:50.602164 1469715 logs.go:138] Found kubelet problem: May 20 10:26:46 addons-091599 kubelet[1492]: W0520 10:26:46.482578    1492 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-091599' and this object
	W0520 10:28:50.602374 1469715 logs.go:138] Found kubelet problem: May 20 10:26:46 addons-091599 kubelet[1492]: E0520 10:26:46.482633    1492 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-091599' and this object
	I0520 10:28:50.641841 1469715 logs.go:123] Gathering logs for container status ...
	I0520 10:28:50.641874 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 10:28:50.689806 1469715 logs.go:123] Gathering logs for kube-proxy [8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65] ...
	I0520 10:28:50.689838 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65"
	I0520 10:28:50.727265 1469715 logs.go:123] Gathering logs for kube-controller-manager [417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77] ...
	I0520 10:28:50.727300 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77"
	I0520 10:28:50.794084 1469715 logs.go:123] Gathering logs for dmesg ...
	I0520 10:28:50.794122 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 10:28:50.813350 1469715 out.go:304] Setting ErrFile to fd 2...
	I0520 10:28:50.813382 1469715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0520 10:28:50.813430 1469715 out.go:239] X Problems detected in kubelet:
	W0520 10:28:50.813444 1469715 out.go:239]   May 20 10:26:46 addons-091599 kubelet[1492]: W0520 10:26:46.429641    1492 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-091599' and this object
	W0520 10:28:50.813451 1469715 out.go:239]   May 20 10:26:46 addons-091599 kubelet[1492]: E0520 10:26:46.429704    1492 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-091599' and this object
	W0520 10:28:50.813460 1469715 out.go:239]   May 20 10:26:46 addons-091599 kubelet[1492]: W0520 10:26:46.482578    1492 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-091599' and this object
	W0520 10:28:50.813471 1469715 out.go:239]   May 20 10:26:46 addons-091599 kubelet[1492]: E0520 10:26:46.482633    1492 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-091599' and this object
	I0520 10:28:50.813477 1469715 out.go:304] Setting ErrFile to fd 2...
	I0520 10:28:50.813483 1469715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:29:00.832565 1469715 system_pods.go:59] 18 kube-system pods found
	I0520 10:29:00.832603 1469715 system_pods.go:61] "coredns-7db6d8ff4d-b9xf7" [6e2fbc19-14ad-48d3-9d75-cada8ca050cd] Running
	I0520 10:29:00.832610 1469715 system_pods.go:61] "csi-hostpath-attacher-0" [74986f6c-64f5-4633-91fa-e5f741e5a472] Running
	I0520 10:29:00.832615 1469715 system_pods.go:61] "csi-hostpath-resizer-0" [f101e109-8cf4-45fb-88bd-fb4f2c9b864b] Running
	I0520 10:29:00.832639 1469715 system_pods.go:61] "csi-hostpathplugin-29tk8" [7d24b514-c559-45cc-bf58-48fc804aba64] Running
	I0520 10:29:00.832650 1469715 system_pods.go:61] "etcd-addons-091599" [578d79c2-858b-40c4-b5dc-323248721eb9] Running
	I0520 10:29:00.832656 1469715 system_pods.go:61] "kindnet-46ck5" [081ed86e-80d3-418e-96ee-eed890edcef1] Running
	I0520 10:29:00.832663 1469715 system_pods.go:61] "kube-apiserver-addons-091599" [f950a9c9-5f3b-4719-96f4-c3cc19a9244c] Running
	I0520 10:29:00.832667 1469715 system_pods.go:61] "kube-controller-manager-addons-091599" [7c254f12-fd04-41dc-a93f-8bb4450ddfc1] Running
	I0520 10:29:00.832679 1469715 system_pods.go:61] "kube-ingress-dns-minikube" [5165966d-7976-41d5-aeda-453818f053d6] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0520 10:29:00.832688 1469715 system_pods.go:61] "kube-proxy-mxn9s" [62fa87b1-b9ee-49b2-bdf5-c453888491fe] Running
	I0520 10:29:00.832693 1469715 system_pods.go:61] "kube-scheduler-addons-091599" [e2982c39-66fa-471e-8738-fa5b24fa2577] Running
	I0520 10:29:00.832696 1469715 system_pods.go:61] "metrics-server-c59844bb4-2952v" [b05bfa4c-b71e-4ba3-82ec-ef3604433ba9] Running
	I0520 10:29:00.832700 1469715 system_pods.go:61] "nvidia-device-plugin-daemonset-xt86b" [e96a5492-ba66-4969-aaa2-03c1ea00e071] Running
	I0520 10:29:00.832726 1469715 system_pods.go:61] "registry-c9mld" [2c38d8b7-c7e2-4b49-a2c6-ce2a95367d53] Running
	I0520 10:29:00.832745 1469715 system_pods.go:61] "registry-proxy-2mv7g" [4c0da18b-a7b2-46aa-9e52-c5273f77fb67] Running
	I0520 10:29:00.832750 1469715 system_pods.go:61] "snapshot-controller-745499f584-b2m64" [1b65aa38-6b40-4c44-b1ea-f996d39e17d5] Running
	I0520 10:29:00.832753 1469715 system_pods.go:61] "snapshot-controller-745499f584-wsxwq" [962657a7-4a31-4fa2-bd12-e9ed25e89f37] Running
	I0520 10:29:00.832757 1469715 system_pods.go:61] "storage-provisioner" [f3bdda63-6ec2-4c3b-a250-090f43416d4d] Running
	I0520 10:29:00.832764 1469715 system_pods.go:74] duration metric: took 11.12479785s to wait for pod list to return data ...
	I0520 10:29:00.832776 1469715 default_sa.go:34] waiting for default service account to be created ...
	I0520 10:29:00.835262 1469715 default_sa.go:45] found service account: "default"
	I0520 10:29:00.835295 1469715 default_sa.go:55] duration metric: took 2.511771ms for default service account to be created ...
	I0520 10:29:00.835306 1469715 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 10:29:00.845848 1469715 system_pods.go:86] 18 kube-system pods found
	I0520 10:29:00.845886 1469715 system_pods.go:89] "coredns-7db6d8ff4d-b9xf7" [6e2fbc19-14ad-48d3-9d75-cada8ca050cd] Running
	I0520 10:29:00.845893 1469715 system_pods.go:89] "csi-hostpath-attacher-0" [74986f6c-64f5-4633-91fa-e5f741e5a472] Running
	I0520 10:29:00.845898 1469715 system_pods.go:89] "csi-hostpath-resizer-0" [f101e109-8cf4-45fb-88bd-fb4f2c9b864b] Running
	I0520 10:29:00.845902 1469715 system_pods.go:89] "csi-hostpathplugin-29tk8" [7d24b514-c559-45cc-bf58-48fc804aba64] Running
	I0520 10:29:00.845906 1469715 system_pods.go:89] "etcd-addons-091599" [578d79c2-858b-40c4-b5dc-323248721eb9] Running
	I0520 10:29:00.845910 1469715 system_pods.go:89] "kindnet-46ck5" [081ed86e-80d3-418e-96ee-eed890edcef1] Running
	I0520 10:29:00.845914 1469715 system_pods.go:89] "kube-apiserver-addons-091599" [f950a9c9-5f3b-4719-96f4-c3cc19a9244c] Running
	I0520 10:29:00.845918 1469715 system_pods.go:89] "kube-controller-manager-addons-091599" [7c254f12-fd04-41dc-a93f-8bb4450ddfc1] Running
	I0520 10:29:00.845929 1469715 system_pods.go:89] "kube-ingress-dns-minikube" [5165966d-7976-41d5-aeda-453818f053d6] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0520 10:29:00.845938 1469715 system_pods.go:89] "kube-proxy-mxn9s" [62fa87b1-b9ee-49b2-bdf5-c453888491fe] Running
	I0520 10:29:00.845952 1469715 system_pods.go:89] "kube-scheduler-addons-091599" [e2982c39-66fa-471e-8738-fa5b24fa2577] Running
	I0520 10:29:00.845956 1469715 system_pods.go:89] "metrics-server-c59844bb4-2952v" [b05bfa4c-b71e-4ba3-82ec-ef3604433ba9] Running
	I0520 10:29:00.845960 1469715 system_pods.go:89] "nvidia-device-plugin-daemonset-xt86b" [e96a5492-ba66-4969-aaa2-03c1ea00e071] Running
	I0520 10:29:00.845968 1469715 system_pods.go:89] "registry-c9mld" [2c38d8b7-c7e2-4b49-a2c6-ce2a95367d53] Running
	I0520 10:29:00.845972 1469715 system_pods.go:89] "registry-proxy-2mv7g" [4c0da18b-a7b2-46aa-9e52-c5273f77fb67] Running
	I0520 10:29:00.845976 1469715 system_pods.go:89] "snapshot-controller-745499f584-b2m64" [1b65aa38-6b40-4c44-b1ea-f996d39e17d5] Running
	I0520 10:29:00.845983 1469715 system_pods.go:89] "snapshot-controller-745499f584-wsxwq" [962657a7-4a31-4fa2-bd12-e9ed25e89f37] Running
	I0520 10:29:00.845987 1469715 system_pods.go:89] "storage-provisioner" [f3bdda63-6ec2-4c3b-a250-090f43416d4d] Running
	I0520 10:29:00.846002 1469715 system_pods.go:126] duration metric: took 10.671376ms to wait for k8s-apps to be running ...
	I0520 10:29:00.846010 1469715 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 10:29:00.846075 1469715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:29:00.858741 1469715 system_svc.go:56] duration metric: took 12.721914ms WaitForService to wait for kubelet
	I0520 10:29:00.858772 1469715 kubeadm.go:576] duration metric: took 2m49.6276108s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 10:29:00.858793 1469715 node_conditions.go:102] verifying NodePressure condition ...
	I0520 10:29:00.861821 1469715 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0520 10:29:00.861854 1469715 node_conditions.go:123] node cpu capacity is 2
	I0520 10:29:00.861867 1469715 node_conditions.go:105] duration metric: took 3.069183ms to run NodePressure ...
	I0520 10:29:00.861879 1469715 start.go:240] waiting for startup goroutines ...
	I0520 10:29:00.861887 1469715 start.go:245] waiting for cluster config update ...
	I0520 10:29:00.861912 1469715 start.go:254] writing updated cluster config ...
	I0520 10:29:00.862220 1469715 ssh_runner.go:195] Run: rm -f paused
	I0520 10:29:01.190560 1469715 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 10:29:01.194703 1469715 out.go:177] * Done! kubectl is now configured to use "addons-091599" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	May 20 10:33:05 addons-091599 conmon[4784]: conmon 3adc3ddeba70384f8937 <ninfo>: container 4795 exited with status 137
	May 20 10:33:05 addons-091599 crio[906]: time="2024-05-20 10:33:05.953682531Z" level=info msg="Stopped container 3adc3ddeba70384f8937032330cc340243f059f16fbad7af65b42a7124ba8d19: ingress-nginx/ingress-nginx-controller-768f948f8f-pr8hb/controller" id=ae483702-2c52-4933-a310-3355497be608 name=/runtime.v1.RuntimeService/StopContainer
	May 20 10:33:05 addons-091599 crio[906]: time="2024-05-20 10:33:05.954339879Z" level=info msg="Stopping pod sandbox: d3c136d45f8bd199ca3c1692a11ad04c412ec0914692ecb165d3bf00d08f21ae" id=dc781703-be60-418a-9642-61ebbd625ecb name=/runtime.v1.RuntimeService/StopPodSandbox
	May 20 10:33:05 addons-091599 crio[906]: time="2024-05-20 10:33:05.958784661Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-42OQR3EOH27XEL6F - [0:0]\n:KUBE-HP-MRL77FSLYWZHB7YT - [0:0]\n-X KUBE-HP-42OQR3EOH27XEL6F\n-X KUBE-HP-MRL77FSLYWZHB7YT\nCOMMIT\n"
	May 20 10:33:05 addons-091599 crio[906]: time="2024-05-20 10:33:05.960250573Z" level=info msg="Closing host port tcp:80"
	May 20 10:33:05 addons-091599 crio[906]: time="2024-05-20 10:33:05.960302880Z" level=info msg="Closing host port tcp:443"
	May 20 10:33:05 addons-091599 crio[906]: time="2024-05-20 10:33:05.961757124Z" level=info msg="Host port tcp:80 does not have an open socket"
	May 20 10:33:05 addons-091599 crio[906]: time="2024-05-20 10:33:05.961800471Z" level=info msg="Host port tcp:443 does not have an open socket"
	May 20 10:33:05 addons-091599 crio[906]: time="2024-05-20 10:33:05.962016409Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-768f948f8f-pr8hb Namespace:ingress-nginx ID:d3c136d45f8bd199ca3c1692a11ad04c412ec0914692ecb165d3bf00d08f21ae UID:3fd1feff-9908-4e6b-a9b5-f9dd37a20987 NetNS:/var/run/netns/aca93044-62e3-4796-843c-f394c4553aac Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	May 20 10:33:05 addons-091599 crio[906]: time="2024-05-20 10:33:05.962179064Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-768f948f8f-pr8hb from CNI network \"kindnet\" (type=ptp)"
	May 20 10:33:05 addons-091599 crio[906]: time="2024-05-20 10:33:05.982148578Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=ffa16ae7-1118-4005-912d-de2423ae695e name=/runtime.v1.ImageService/ImageStatus
	May 20 10:33:05 addons-091599 crio[906]: time="2024-05-20 10:33:05.982376298Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=ffa16ae7-1118-4005-912d-de2423ae695e name=/runtime.v1.ImageService/ImageStatus
	May 20 10:33:05 addons-091599 crio[906]: time="2024-05-20 10:33:05.983667445Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=dac0c66c-a5d3-45ab-9607-aae051a5adcb name=/runtime.v1.ImageService/ImageStatus
	May 20 10:33:05 addons-091599 crio[906]: time="2024-05-20 10:33:05.983854509Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=dac0c66c-a5d3-45ab-9607-aae051a5adcb name=/runtime.v1.ImageService/ImageStatus
	May 20 10:33:05 addons-091599 crio[906]: time="2024-05-20 10:33:05.985313373Z" level=info msg="Creating container: default/hello-world-app-86c47465fc-6r6ll/hello-world-app" id=6b79b2d3-2474-413f-857d-2560c5510347 name=/runtime.v1.RuntimeService/CreateContainer
	May 20 10:33:05 addons-091599 crio[906]: time="2024-05-20 10:33:05.985409010Z" level=warning msg="Allowed annotations are specified for workload []"
	May 20 10:33:05 addons-091599 crio[906]: time="2024-05-20 10:33:05.997136469Z" level=info msg="Stopped pod sandbox: d3c136d45f8bd199ca3c1692a11ad04c412ec0914692ecb165d3bf00d08f21ae" id=dc781703-be60-418a-9642-61ebbd625ecb name=/runtime.v1.RuntimeService/StopPodSandbox
	May 20 10:33:06 addons-091599 crio[906]: time="2024-05-20 10:33:06.066822041Z" level=info msg="Created container 86ca285bf3cceca125d0a884cc4e982648f339da52313e55aca46d14a2fddd21: default/hello-world-app-86c47465fc-6r6ll/hello-world-app" id=6b79b2d3-2474-413f-857d-2560c5510347 name=/runtime.v1.RuntimeService/CreateContainer
	May 20 10:33:06 addons-091599 crio[906]: time="2024-05-20 10:33:06.067777326Z" level=info msg="Starting container: 86ca285bf3cceca125d0a884cc4e982648f339da52313e55aca46d14a2fddd21" id=a52ee6b3-6f60-4865-a716-3e12d804e9cc name=/runtime.v1.RuntimeService/StartContainer
	May 20 10:33:06 addons-091599 crio[906]: time="2024-05-20 10:33:06.079400065Z" level=info msg="Started container" PID=8523 containerID=86ca285bf3cceca125d0a884cc4e982648f339da52313e55aca46d14a2fddd21 description=default/hello-world-app-86c47465fc-6r6ll/hello-world-app id=a52ee6b3-6f60-4865-a716-3e12d804e9cc name=/runtime.v1.RuntimeService/StartContainer sandboxID=89c71ba72dc6f4f0d7037eca4b4da6b3e9f384838ea309a934be19a18dec3590
	May 20 10:33:06 addons-091599 conmon[8512]: conmon 86ca285bf3cceca125d0 <ninfo>: container 8523 exited with status 1
	May 20 10:33:06 addons-091599 crio[906]: time="2024-05-20 10:33:06.134922836Z" level=info msg="Removing container: 3adc3ddeba70384f8937032330cc340243f059f16fbad7af65b42a7124ba8d19" id=6a52b4db-ee93-4f4d-843e-6db6d1c0e989 name=/runtime.v1.RuntimeService/RemoveContainer
	May 20 10:33:06 addons-091599 crio[906]: time="2024-05-20 10:33:06.155550961Z" level=info msg="Removed container 3adc3ddeba70384f8937032330cc340243f059f16fbad7af65b42a7124ba8d19: ingress-nginx/ingress-nginx-controller-768f948f8f-pr8hb/controller" id=6a52b4db-ee93-4f4d-843e-6db6d1c0e989 name=/runtime.v1.RuntimeService/RemoveContainer
	May 20 10:33:06 addons-091599 crio[906]: time="2024-05-20 10:33:06.157835653Z" level=info msg="Removing container: 72510677f74a53da1899eaf2b9dfb041eeddfbd54cd400cb563fec898352790c" id=89c12870-adb2-4b63-aa92-13afa50e79ef name=/runtime.v1.RuntimeService/RemoveContainer
	May 20 10:33:06 addons-091599 crio[906]: time="2024-05-20 10:33:06.178536244Z" level=info msg="Removed container 72510677f74a53da1899eaf2b9dfb041eeddfbd54cd400cb563fec898352790c: default/hello-world-app-86c47465fc-6r6ll/hello-world-app" id=89c12870-adb2-4b63-aa92-13afa50e79ef name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	86ca285bf3cce       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                             5 seconds ago       Exited              hello-world-app           2                   89c71ba72dc6f       hello-world-app-86c47465fc-6r6ll
	8453a37ffff10       docker.io/library/nginx@sha256:05325b3a32db871dc396a859d9a9609d75f50d2f7ad12194f9f3a550111bdcaa                              2 minutes ago       Running             nginx                     0                   639c326bab448       nginx
	48ed74134a932       ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474                        4 minutes ago       Running             headlamp                  0                   7054d05485d2a       headlamp-68456f997b-kzpsq
	48a7b240a2508       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 5 minutes ago       Running             gcp-auth                  0                   d45942f337131       gcp-auth-5db96cd9b4-tpqqv
	7ba31c6caa3ea       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   5 minutes ago       Exited              patch                     0                   c3a2a56291408       ingress-nginx-admission-patch-dptrv
	02f9a1a4f7601       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70        5 minutes ago       Running             metrics-server            0                   3e7dbb37eb418       metrics-server-c59844bb4-2952v
	ba27e2e0ccc2d       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              6 minutes ago       Running             yakd                      0                   cd9fc24a11df7       yakd-dashboard-5ddbf7d777-zk8ph
	b16e813b43191       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   6 minutes ago       Exited              create                    0                   0c56eefc10118       ingress-nginx-admission-create-xrmnh
	a4f3accb83dd9       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                             6 minutes ago       Running             coredns                   0                   25e1db186e60d       coredns-7db6d8ff4d-b9xf7
	0245d6608194b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             6 minutes ago       Running             storage-provisioner       0                   edbda11fa8f1b       storage-provisioner
	8c5f80237ca50       05eccb821e159092de1ef74c21cd16c403e1cda549ba56687391f50f087310ee                                                             6 minutes ago       Running             kube-proxy                0                   30b7d1c053dec       kube-proxy-mxn9s
	eacd599cd704c       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d                                                             6 minutes ago       Running             kindnet-cni               0                   aa5db3edaf56f       kindnet-46ck5
	afcbbf4b5b7a4       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                             7 minutes ago       Running             etcd                      0                   f9cef73b37f03       etcd-addons-091599
	cd0f27c747443       163ff818d154da230c6c8a6211a0d939689ba04df75dba2c3742fe740ffdf44a                                                             7 minutes ago       Running             kube-scheduler            0                   c23d3b94371cd       kube-scheduler-addons-091599
	733d7717e335e       988b55d423baf54b1515b7560890c84d1822a8dfbdfcecfb2576f8cb5c2b28ee                                                             7 minutes ago       Running             kube-apiserver            0                   d7ef59cfc9fcc       kube-apiserver-addons-091599
	417ff80330879       234ac56e455bef8ae70f120f49fcf5daea5c4171c5ffbf8f9e791a516fef99b4                                                             7 minutes ago       Running             kube-controller-manager   0                   926131727e20d       kube-controller-manager-addons-091599
	
	
	==> coredns [a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b] <==
	[INFO] 10.244.0.20:51206 - 35368 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048573s
	[INFO] 10.244.0.20:51206 - 50867 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000062259s
	[INFO] 10.244.0.20:35306 - 24989 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002379122s
	[INFO] 10.244.0.20:51206 - 59263 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001380375s
	[INFO] 10.244.0.20:51206 - 54430 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001102638s
	[INFO] 10.244.0.20:35306 - 62106 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000590288s
	[INFO] 10.244.0.20:51206 - 62475 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000051002s
	[INFO] 10.244.0.20:36704 - 56449 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000119251s
	[INFO] 10.244.0.20:52848 - 13990 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000171738s
	[INFO] 10.244.0.20:52848 - 64641 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00005544s
	[INFO] 10.244.0.20:36704 - 27944 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000065943s
	[INFO] 10.244.0.20:36704 - 55226 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000054973s
	[INFO] 10.244.0.20:52848 - 824 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000046719s
	[INFO] 10.244.0.20:52848 - 37686 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000046924s
	[INFO] 10.244.0.20:52848 - 54635 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041451s
	[INFO] 10.244.0.20:52848 - 37699 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000050141s
	[INFO] 10.244.0.20:36704 - 29272 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000342499s
	[INFO] 10.244.0.20:36704 - 24939 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000191011s
	[INFO] 10.244.0.20:52848 - 53833 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001304061s
	[INFO] 10.244.0.20:36704 - 61187 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000067002s
	[INFO] 10.244.0.20:52848 - 34392 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001235214s
	[INFO] 10.244.0.20:36704 - 63626 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001428752s
	[INFO] 10.244.0.20:52848 - 16330 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000055432s
	[INFO] 10.244.0.20:36704 - 21264 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001107758s
	[INFO] 10.244.0.20:36704 - 51671 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000050288s
	
	
	==> describe nodes <==
	Name:               addons-091599
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-091599
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=addons-091599
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T10_25_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-091599
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 10:25:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-091599
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 10:33:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 10:33:06 +0000   Mon, 20 May 2024 10:25:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 10:33:06 +0000   Mon, 20 May 2024 10:25:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 10:33:06 +0000   Mon, 20 May 2024 10:25:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 10:33:06 +0000   Mon, 20 May 2024 10:26:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-091599
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 51b9f74756bb4cda9ea81c779a1d1fc0
	  System UUID:                4e008f60-cdc7-4895-a474-c1c9872af671
	  Boot ID:                    df9684e8-d429-41b3-8a9f-ef96b9c9133b
	  Kernel Version:             5.15.0-1058-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-6r6ll         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  gcp-auth                    gcp-auth-5db96cd9b4-tpqqv                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	  headlamp                    headlamp-68456f997b-kzpsq                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 coredns-7db6d8ff4d-b9xf7                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m58s
	  kube-system                 etcd-addons-091599                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7m13s
	  kube-system                 kindnet-46ck5                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m59s
	  kube-system                 kube-apiserver-addons-091599             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 kube-controller-manager-addons-091599    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 kube-proxy-mxn9s                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m58s
	  kube-system                 kube-scheduler-addons-091599             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 metrics-server-c59844bb4-2952v           100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m55s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m55s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-zk8ph          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     6m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             548Mi (6%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m53s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m20s (x8 over 7m20s)  kubelet          Node addons-091599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m20s (x8 over 7m20s)  kubelet          Node addons-091599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m20s (x8 over 7m20s)  kubelet          Node addons-091599 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m13s                  kubelet          Node addons-091599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m13s                  kubelet          Node addons-091599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m13s                  kubelet          Node addons-091599 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m1s                   node-controller  Node addons-091599 event: Registered Node addons-091599 in Controller
	  Normal  NodeReady                6m25s                  kubelet          Node addons-091599 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000971] FS-Cache: N-cookie d=00000000865130fc{9p.inode} n=0000000088505b0d
	[  +0.001079] FS-Cache: N-key=[8] '9a823b0000000000'
	[  +0.005246] FS-Cache: Duplicate cookie detected
	[  +0.000764] FS-Cache: O-cookie c=0000016e [p=0000016b fl=226 nc=0 na=1]
	[  +0.001034] FS-Cache: O-cookie d=00000000865130fc{9p.inode} n=00000000c223b28b
	[  +0.001098] FS-Cache: O-key=[8] '9a823b0000000000'
	[  +0.000735] FS-Cache: N-cookie c=00000175 [p=0000016b fl=2 nc=0 na=1]
	[  +0.001050] FS-Cache: N-cookie d=00000000865130fc{9p.inode} n=00000000d2aa7710
	[  +0.001064] FS-Cache: N-key=[8] '9a823b0000000000'
	[  +2.844064] FS-Cache: Duplicate cookie detected
	[  +0.000773] FS-Cache: O-cookie c=0000016c [p=0000016b fl=226 nc=0 na=1]
	[  +0.001012] FS-Cache: O-cookie d=00000000865130fc{9p.inode} n=00000000e950dce3
	[  +0.001203] FS-Cache: O-key=[8] '99823b0000000000'
	[  +0.000830] FS-Cache: N-cookie c=00000177 [p=0000016b fl=2 nc=0 na=1]
	[  +0.001028] FS-Cache: N-cookie d=00000000865130fc{9p.inode} n=0000000088505b0d
	[  +0.001149] FS-Cache: N-key=[8] '99823b0000000000'
	[  +0.273462] FS-Cache: Duplicate cookie detected
	[  +0.000786] FS-Cache: O-cookie c=00000171 [p=0000016b fl=226 nc=0 na=1]
	[  +0.001011] FS-Cache: O-cookie d=00000000865130fc{9p.inode} n=00000000648a8f54
	[  +0.001088] FS-Cache: O-key=[8] 'a1823b0000000000'
	[  +0.000816] FS-Cache: N-cookie c=00000178 [p=0000016b fl=2 nc=0 na=1]
	[  +0.000985] FS-Cache: N-cookie d=00000000865130fc{9p.inode} n=00000000ccc9761f
	[  +0.001097] FS-Cache: N-key=[8] 'a1823b0000000000'
	[May20 09:58] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	[  +0.555832] overlayfs: '/var/lib/containers/storage/overlay/l/ZLTOCNGE2IGM6DT7VP2QP7OV3M' not a directory
	
	
	==> etcd [afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3] <==
	{"level":"info","ts":"2024-05-20T10:26:13.910357Z","caller":"traceutil/trace.go:171","msg":"trace[64789948] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"118.038539ms","start":"2024-05-20T10:26:13.792282Z","end":"2024-05-20T10:26:13.910321Z","steps":["trace[64789948] 'process raft request'  (duration: 14.304313ms)","trace[64789948] 'compare'  (duration: 41.838836ms)","trace[64789948] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/pods/kube-system/kube-proxy-mxn9s; req_size:3408; } (duration: 59.653118ms)"],"step_count":3}
	{"level":"info","ts":"2024-05-20T10:26:14.506258Z","caller":"traceutil/trace.go:171","msg":"trace[1369869291] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"174.868623ms","start":"2024-05-20T10:26:14.33137Z","end":"2024-05-20T10:26:14.506238Z","steps":["trace[1369869291] 'process raft request'  (duration: 106.190656ms)","trace[1369869291] 'compare'  (duration: 65.326321ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T10:26:15.148786Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.294141ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-mxn9s\" ","response":"range_response_count:1 size:3426"}
	{"level":"info","ts":"2024-05-20T10:26:15.156623Z","caller":"traceutil/trace.go:171","msg":"trace[448250679] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-mxn9s; range_end:; response_count:1; response_revision:389; }","duration":"130.110541ms","start":"2024-05-20T10:26:15.026465Z","end":"2024-05-20T10:26:15.156575Z","steps":["trace[448250679] 'agreement among raft nodes before linearized reading'  (duration: 22.882296ms)","trace[448250679] 'range keys from in-memory index tree'  (duration: 99.362057ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T10:26:15.21604Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.338541ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128029299912726183 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" mod_revision:0 > success:<request_put:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" value_size:3057 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-05-20T10:26:15.269628Z","caller":"traceutil/trace.go:171","msg":"trace[1872530540] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"240.839942ms","start":"2024-05-20T10:26:15.028701Z","end":"2024-05-20T10:26:15.269541Z","steps":["trace[1872530540] 'process raft request'  (duration: 41.943425ms)","trace[1872530540] 'store kv pair into bolt db' {req_type:put; key:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; req_size:3125; } (duration: 86.915681ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T10:26:15.291469Z","caller":"traceutil/trace.go:171","msg":"trace[489295856] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"241.994771ms","start":"2024-05-20T10:26:15.049459Z","end":"2024-05-20T10:26:15.291454Z","steps":["trace[489295856] 'process raft request'  (duration: 220.052262ms)","trace[489295856] 'compare'  (duration: 21.478881ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T10:26:15.297129Z","caller":"traceutil/trace.go:171","msg":"trace[199199671] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"247.603115ms","start":"2024-05-20T10:26:15.049507Z","end":"2024-05-20T10:26:15.29711Z","steps":["trace[199199671] 'process raft request'  (duration: 241.566056ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:26:15.297361Z","caller":"traceutil/trace.go:171","msg":"trace[2005051907] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"227.377695ms","start":"2024-05-20T10:26:15.069977Z","end":"2024-05-20T10:26:15.297354Z","steps":["trace[2005051907] 'process raft request'  (duration: 221.12711ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:26:15.297453Z","caller":"traceutil/trace.go:171","msg":"trace[1248373236] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"227.080364ms","start":"2024-05-20T10:26:15.070366Z","end":"2024-05-20T10:26:15.297447Z","steps":["trace[1248373236] 'process raft request'  (duration: 220.770753ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:26:15.297545Z","caller":"traceutil/trace.go:171","msg":"trace[1744556478] linearizableReadLoop","detail":"{readStateIndex:404; appliedIndex:400; }","duration":"227.169823ms","start":"2024-05-20T10:26:15.070351Z","end":"2024-05-20T10:26:15.297521Z","steps":["trace[1744556478] 'read index received'  (duration: 298.496µs)","trace[1744556478] 'applied index is now lower than readState.Index'  (duration: 226.870736ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T10:26:15.301846Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.487331ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T10:26:15.301975Z","caller":"traceutil/trace.go:171","msg":"trace[706956391] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:395; }","duration":"231.62679ms","start":"2024-05-20T10:26:15.070335Z","end":"2024-05-20T10:26:15.301962Z","steps":["trace[706956391] 'agreement among raft nodes before linearized reading'  (duration: 231.44866ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:26:16.172439Z","caller":"traceutil/trace.go:171","msg":"trace[479206389] linearizableReadLoop","detail":"{readStateIndex:481; appliedIndex:480; }","duration":"118.652392ms","start":"2024-05-20T10:26:16.053751Z","end":"2024-05-20T10:26:16.172403Z","steps":["trace[479206389] 'read index received'  (duration: 8.647463ms)","trace[479206389] 'applied index is now lower than readState.Index'  (duration: 109.738203ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T10:26:16.172649Z","caller":"traceutil/trace.go:171","msg":"trace[338265923] transaction","detail":"{read_only:false; response_revision:471; number_of_response:1; }","duration":"112.882404ms","start":"2024-05-20T10:26:16.05927Z","end":"2024-05-20T10:26:16.172152Z","steps":["trace[338265923] 'process raft request'  (duration: 91.466955ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:26:16.172796Z","caller":"traceutil/trace.go:171","msg":"trace[338464100] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"108.23194ms","start":"2024-05-20T10:26:16.064557Z","end":"2024-05-20T10:26:16.172789Z","steps":["trace[338464100] 'process raft request'  (duration: 86.251548ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:26:16.173089Z","caller":"traceutil/trace.go:171","msg":"trace[1676262612] transaction","detail":"{read_only:false; response_revision:470; number_of_response:1; }","duration":"103.616314ms","start":"2024-05-20T10:26:16.058731Z","end":"2024-05-20T10:26:16.162347Z","steps":["trace[1676262612] 'process raft request'  (duration: 18.139173ms)","trace[1676262612] 'attach lease to kv pair' {req_type:put; key:/registry/events/kube-system/metrics-server.17d12b835118af6a; req_size:704; } (duration: 73.765295ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T10:26:16.185145Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.013171ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-05-20T10:26:16.185246Z","caller":"traceutil/trace.go:171","msg":"trace[159244787] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:478; }","duration":"126.123323ms","start":"2024-05-20T10:26:16.059111Z","end":"2024-05-20T10:26:16.185234Z","steps":["trace[159244787] 'agreement among raft nodes before linearized reading'  (duration: 125.915147ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T10:26:16.186341Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.449455ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/yakd-dashboard/yakd-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-05-20T10:26:16.173387Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.624391ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4096"}
	{"level":"warn","ts":"2024-05-20T10:26:16.1894Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.237923ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-05-20T10:26:16.189452Z","caller":"traceutil/trace.go:171","msg":"trace[414579672] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:478; }","duration":"130.298491ms","start":"2024-05-20T10:26:16.059144Z","end":"2024-05-20T10:26:16.189443Z","steps":["trace[414579672] 'agreement among raft nodes before linearized reading'  (duration: 130.154002ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:26:16.18943Z","caller":"traceutil/trace.go:171","msg":"trace[2120978683] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:473; }","duration":"135.671573ms","start":"2024-05-20T10:26:16.053744Z","end":"2024-05-20T10:26:16.189416Z","steps":["trace[2120978683] 'agreement among raft nodes before linearized reading'  (duration: 118.396356ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:26:16.190482Z","caller":"traceutil/trace.go:171","msg":"trace[907544207] range","detail":"{range_begin:/registry/services/specs/yakd-dashboard/yakd-dashboard; range_end:; response_count:0; response_revision:478; }","duration":"124.592895ms","start":"2024-05-20T10:26:16.065878Z","end":"2024-05-20T10:26:16.190471Z","steps":["trace[907544207] 'agreement among raft nodes before linearized reading'  (duration: 120.439864ms)"],"step_count":1}
	
	
	==> gcp-auth [48a7b240a25082dbb7e9990a41a8900ff0b7b63cd49dc0701beccbb9ee525c07] <==
	2024/05/20 10:27:33 GCP Auth Webhook started!
	2024/05/20 10:29:02 Ready to marshal response ...
	2024/05/20 10:29:02 Ready to write response ...
	2024/05/20 10:29:02 Ready to marshal response ...
	2024/05/20 10:29:02 Ready to write response ...
	2024/05/20 10:29:02 Ready to marshal response ...
	2024/05/20 10:29:02 Ready to write response ...
	2024/05/20 10:29:12 Ready to marshal response ...
	2024/05/20 10:29:12 Ready to write response ...
	2024/05/20 10:29:17 Ready to marshal response ...
	2024/05/20 10:29:17 Ready to write response ...
	2024/05/20 10:29:17 Ready to marshal response ...
	2024/05/20 10:29:17 Ready to write response ...
	2024/05/20 10:29:25 Ready to marshal response ...
	2024/05/20 10:29:25 Ready to write response ...
	2024/05/20 10:29:36 Ready to marshal response ...
	2024/05/20 10:29:36 Ready to write response ...
	2024/05/20 10:30:08 Ready to marshal response ...
	2024/05/20 10:30:08 Ready to write response ...
	2024/05/20 10:30:24 Ready to marshal response ...
	2024/05/20 10:30:24 Ready to write response ...
	2024/05/20 10:32:45 Ready to marshal response ...
	2024/05/20 10:32:45 Ready to write response ...
	
	
	==> kernel <==
	 10:33:11 up 1 day, 18:15,  0 users,  load average: 0.15, 1.14, 1.99
	Linux addons-091599 5.15.0-1058-aws #64~20.04.1-Ubuntu SMP Tue Apr 9 11:11:55 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da] <==
	I0520 10:31:06.475937       1 main.go:227] handling current node
	I0520 10:31:16.480347       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0520 10:31:16.480376       1 main.go:227] handling current node
	I0520 10:31:26.490895       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0520 10:31:26.490920       1 main.go:227] handling current node
	I0520 10:31:36.501922       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0520 10:31:36.501953       1 main.go:227] handling current node
	I0520 10:31:46.516955       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0520 10:31:46.516994       1 main.go:227] handling current node
	I0520 10:31:56.520655       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0520 10:31:56.520684       1 main.go:227] handling current node
	I0520 10:32:06.528809       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0520 10:32:06.528837       1 main.go:227] handling current node
	I0520 10:32:16.532810       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0520 10:32:16.532841       1 main.go:227] handling current node
	I0520 10:32:26.536779       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0520 10:32:26.536808       1 main.go:227] handling current node
	I0520 10:32:36.547318       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0520 10:32:36.547349       1 main.go:227] handling current node
	I0520 10:32:46.559470       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0520 10:32:46.559496       1 main.go:227] handling current node
	I0520 10:32:56.563494       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0520 10:32:56.563524       1 main.go:227] handling current node
	I0520 10:33:06.574121       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0520 10:33:06.574151       1 main.go:227] handling current node
	
	
	==> kube-apiserver [733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b] <==
	E0520 10:28:27.141538       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.204.129:443/apis/metrics.k8s.io/v1beta1: Get "https://10.108.204.129:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.108.204.129:443: connect: connection refused
	E0520 10:28:27.151253       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.204.129:443/apis/metrics.k8s.io/v1beta1: Get "https://10.108.204.129:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.108.204.129:443: connect: connection refused
	E0520 10:28:27.175742       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.204.129:443/apis/metrics.k8s.io/v1beta1: Get "https://10.108.204.129:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.108.204.129:443: connect: connection refused
	I0520 10:28:27.280565       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0520 10:29:02.089328       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.6.185"}
	E0520 10:29:41.191006       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0520 10:29:47.856267       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0520 10:30:15.045809       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0520 10:30:16.084023       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0520 10:30:20.321327       1 watch.go:250] http2: stream closed
	I0520 10:30:23.364467       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 10:30:23.364598       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 10:30:23.395213       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 10:30:23.395278       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 10:30:23.438027       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 10:30:23.438078       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 10:30:23.467459       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 10:30:23.467502       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 10:30:24.066968       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0520 10:30:24.365203       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.46.171"}
	W0520 10:30:24.440350       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0520 10:30:24.468445       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0520 10:30:24.501291       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0520 10:32:45.632340       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.254.96"}
	E0520 10:33:02.158270       1 watch.go:250] http2: stream closed
	
	
	==> kube-controller-manager [417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77] <==
	W0520 10:31:56.688697       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:31:56.688736       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:32:14.003039       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:32:14.003188       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:32:20.622211       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:32:20.622334       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:32:34.253218       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:32:34.253271       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:32:42.298352       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:32:42.298491       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0520 10:32:45.436111       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="47.738322ms"
	I0520 10:32:45.456514       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="20.356435ms"
	I0520 10:32:45.456580       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="29.119µs"
	I0520 10:32:45.473332       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="39.113µs"
	I0520 10:32:49.115593       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="47.047µs"
	I0520 10:32:50.113013       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="42.305µs"
	I0520 10:32:51.105199       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="44.102µs"
	W0520 10:32:55.013822       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:32:55.013876       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:32:55.111882       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:32:55.111920       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0520 10:33:02.781836       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0520 10:33:02.784676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="7.581µs"
	I0520 10:33:02.789944       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0520 10:33:06.159156       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="141.576µs"
	
	
	==> kube-proxy [8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65] <==
	I0520 10:26:17.239321       1 server_linux.go:69] "Using iptables proxy"
	I0520 10:26:17.300199       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0520 10:26:17.593238       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0520 10:26:17.593365       1 server_linux.go:165] "Using iptables Proxier"
	I0520 10:26:17.598117       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0520 10:26:17.598330       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0520 10:26:17.598411       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 10:26:17.598661       1 server.go:872] "Version info" version="v1.30.1"
	I0520 10:26:17.598882       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 10:26:17.599801       1 config.go:192] "Starting service config controller"
	I0520 10:26:17.599861       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 10:26:17.599912       1 config.go:101] "Starting endpoint slice config controller"
	I0520 10:26:17.599940       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 10:26:17.600441       1 config.go:319] "Starting node config controller"
	I0520 10:26:17.600495       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 10:26:17.700869       1 shared_informer.go:320] Caches are synced for node config
	I0520 10:26:17.700995       1 shared_informer.go:320] Caches are synced for service config
	I0520 10:26:17.701024       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea] <==
	W0520 10:25:55.962558       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 10:25:55.962573       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 10:25:55.962615       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 10:25:55.962632       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 10:25:55.962671       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 10:25:55.962686       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 10:25:55.962894       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 10:25:55.962912       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 10:25:55.962947       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 10:25:55.962962       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 10:25:55.962999       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 10:25:55.963014       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 10:25:55.963051       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 10:25:55.963066       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 10:25:55.963107       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 10:25:55.963122       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 10:25:55.963160       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 10:25:55.963174       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 10:25:55.963209       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 10:25:55.963222       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 10:25:55.963304       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 10:25:55.963430       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 10:25:55.963448       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 10:25:55.963868       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0520 10:25:57.354082       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 10:32:51 addons-091599 kubelet[1492]: E0520 10:32:51.092998    1492 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-6r6ll_default(b5b40db0-7b53-4d4f-821b-1b114daab242)\"" pod="default/hello-world-app-86c47465fc-6r6ll" podUID="b5b40db0-7b53-4d4f-821b-1b114daab242"
	May 20 10:32:51 addons-091599 kubelet[1492]: I0520 10:32:51.981280    1492 scope.go:117] "RemoveContainer" containerID="1d5b7dc01225e608a54abbf661edda0748b2cf85eb62672634d70803e38508ec"
	May 20 10:32:51 addons-091599 kubelet[1492]: E0520 10:32:51.981558    1492 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(5165966d-7976-41d5-aeda-453818f053d6)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="5165966d-7976-41d5-aeda-453818f053d6"
	May 20 10:33:01 addons-091599 kubelet[1492]: I0520 10:33:01.524348    1492 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf6kl\" (UniqueName: \"kubernetes.io/projected/5165966d-7976-41d5-aeda-453818f053d6-kube-api-access-vf6kl\") pod \"5165966d-7976-41d5-aeda-453818f053d6\" (UID: \"5165966d-7976-41d5-aeda-453818f053d6\") "
	May 20 10:33:01 addons-091599 kubelet[1492]: I0520 10:33:01.529272    1492 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5165966d-7976-41d5-aeda-453818f053d6-kube-api-access-vf6kl" (OuterVolumeSpecName: "kube-api-access-vf6kl") pod "5165966d-7976-41d5-aeda-453818f053d6" (UID: "5165966d-7976-41d5-aeda-453818f053d6"). InnerVolumeSpecName "kube-api-access-vf6kl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 20 10:33:01 addons-091599 kubelet[1492]: I0520 10:33:01.624949    1492 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vf6kl\" (UniqueName: \"kubernetes.io/projected/5165966d-7976-41d5-aeda-453818f053d6-kube-api-access-vf6kl\") on node \"addons-091599\" DevicePath \"\""
	May 20 10:33:02 addons-091599 kubelet[1492]: I0520 10:33:02.122583    1492 scope.go:117] "RemoveContainer" containerID="1d5b7dc01225e608a54abbf661edda0748b2cf85eb62672634d70803e38508ec"
	May 20 10:33:03 addons-091599 kubelet[1492]: I0520 10:33:03.983237    1492 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d2cbd28-8c4c-4697-bc55-f6ab9174be0d" path="/var/lib/kubelet/pods/4d2cbd28-8c4c-4697-bc55-f6ab9174be0d/volumes"
	May 20 10:33:03 addons-091599 kubelet[1492]: I0520 10:33:03.983665    1492 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5165966d-7976-41d5-aeda-453818f053d6" path="/var/lib/kubelet/pods/5165966d-7976-41d5-aeda-453818f053d6/volumes"
	May 20 10:33:03 addons-091599 kubelet[1492]: I0520 10:33:03.984095    1492 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="961f8d12-5a93-4c31-9fa3-4e881e01f9c7" path="/var/lib/kubelet/pods/961f8d12-5a93-4c31-9fa3-4e881e01f9c7/volumes"
	May 20 10:33:05 addons-091599 kubelet[1492]: I0520 10:33:05.981554    1492 scope.go:117] "RemoveContainer" containerID="72510677f74a53da1899eaf2b9dfb041eeddfbd54cd400cb563fec898352790c"
	May 20 10:33:06 addons-091599 kubelet[1492]: I0520 10:33:06.133273    1492 scope.go:117] "RemoveContainer" containerID="3adc3ddeba70384f8937032330cc340243f059f16fbad7af65b42a7124ba8d19"
	May 20 10:33:06 addons-091599 kubelet[1492]: I0520 10:33:06.136162    1492 scope.go:117] "RemoveContainer" containerID="86ca285bf3cceca125d0a884cc4e982648f339da52313e55aca46d14a2fddd21"
	May 20 10:33:06 addons-091599 kubelet[1492]: E0520 10:33:06.136441    1492 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-6r6ll_default(b5b40db0-7b53-4d4f-821b-1b114daab242)\"" pod="default/hello-world-app-86c47465fc-6r6ll" podUID="b5b40db0-7b53-4d4f-821b-1b114daab242"
	May 20 10:33:06 addons-091599 kubelet[1492]: I0520 10:33:06.155826    1492 scope.go:117] "RemoveContainer" containerID="3adc3ddeba70384f8937032330cc340243f059f16fbad7af65b42a7124ba8d19"
	May 20 10:33:06 addons-091599 kubelet[1492]: E0520 10:33:06.156278    1492 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3adc3ddeba70384f8937032330cc340243f059f16fbad7af65b42a7124ba8d19\": container with ID starting with 3adc3ddeba70384f8937032330cc340243f059f16fbad7af65b42a7124ba8d19 not found: ID does not exist" containerID="3adc3ddeba70384f8937032330cc340243f059f16fbad7af65b42a7124ba8d19"
	May 20 10:33:06 addons-091599 kubelet[1492]: I0520 10:33:06.156318    1492 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3adc3ddeba70384f8937032330cc340243f059f16fbad7af65b42a7124ba8d19"} err="failed to get container status \"3adc3ddeba70384f8937032330cc340243f059f16fbad7af65b42a7124ba8d19\": rpc error: code = NotFound desc = could not find container \"3adc3ddeba70384f8937032330cc340243f059f16fbad7af65b42a7124ba8d19\": container with ID starting with 3adc3ddeba70384f8937032330cc340243f059f16fbad7af65b42a7124ba8d19 not found: ID does not exist"
	May 20 10:33:06 addons-091599 kubelet[1492]: I0520 10:33:06.156346    1492 scope.go:117] "RemoveContainer" containerID="72510677f74a53da1899eaf2b9dfb041eeddfbd54cd400cb563fec898352790c"
	May 20 10:33:06 addons-091599 kubelet[1492]: I0520 10:33:06.161151    1492 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3fd1feff-9908-4e6b-a9b5-f9dd37a20987-webhook-cert\") pod \"3fd1feff-9908-4e6b-a9b5-f9dd37a20987\" (UID: \"3fd1feff-9908-4e6b-a9b5-f9dd37a20987\") "
	May 20 10:33:06 addons-091599 kubelet[1492]: I0520 10:33:06.161205    1492 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cktbx\" (UniqueName: \"kubernetes.io/projected/3fd1feff-9908-4e6b-a9b5-f9dd37a20987-kube-api-access-cktbx\") pod \"3fd1feff-9908-4e6b-a9b5-f9dd37a20987\" (UID: \"3fd1feff-9908-4e6b-a9b5-f9dd37a20987\") "
	May 20 10:33:06 addons-091599 kubelet[1492]: I0520 10:33:06.165999    1492 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fd1feff-9908-4e6b-a9b5-f9dd37a20987-kube-api-access-cktbx" (OuterVolumeSpecName: "kube-api-access-cktbx") pod "3fd1feff-9908-4e6b-a9b5-f9dd37a20987" (UID: "3fd1feff-9908-4e6b-a9b5-f9dd37a20987"). InnerVolumeSpecName "kube-api-access-cktbx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 20 10:33:06 addons-091599 kubelet[1492]: I0520 10:33:06.167087    1492 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fd1feff-9908-4e6b-a9b5-f9dd37a20987-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "3fd1feff-9908-4e6b-a9b5-f9dd37a20987" (UID: "3fd1feff-9908-4e6b-a9b5-f9dd37a20987"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 20 10:33:06 addons-091599 kubelet[1492]: I0520 10:33:06.261788    1492 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3fd1feff-9908-4e6b-a9b5-f9dd37a20987-webhook-cert\") on node \"addons-091599\" DevicePath \"\""
	May 20 10:33:06 addons-091599 kubelet[1492]: I0520 10:33:06.261830    1492 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-cktbx\" (UniqueName: \"kubernetes.io/projected/3fd1feff-9908-4e6b-a9b5-f9dd37a20987-kube-api-access-cktbx\") on node \"addons-091599\" DevicePath \"\""
	May 20 10:33:07 addons-091599 kubelet[1492]: I0520 10:33:07.983899    1492 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fd1feff-9908-4e6b-a9b5-f9dd37a20987" path="/var/lib/kubelet/pods/3fd1feff-9908-4e6b-a9b5-f9dd37a20987/volumes"
	
	
	==> storage-provisioner [0245d6608194b64eee2101b06e4cbfc8ab143d324f261c3b80742e761338c8fc] <==
	I0520 10:26:47.138665       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 10:26:47.190627       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 10:26:47.190825       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 10:26:47.389802       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 10:26:47.428791       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"519ef862-bb4e-4780-b3e2-115d6332d3ad", APIVersion:"v1", ResourceVersion:"931", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-091599_5b22d81e-75c6-49c9-b440-170c8ff90cf1 became leader
	I0520 10:26:47.433329       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-091599_5b22d81e-75c6-49c9-b440-170c8ff90cf1!
	I0520 10:26:47.533991       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-091599_5b22d81e-75c6-49c9-b440-170c8ff90cf1!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-091599 -n addons-091599
helpers_test.go:261: (dbg) Run:  kubectl --context addons-091599 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (168.68s)

TestAddons/parallel/MetricsServer (311.21s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer


=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 2.95492ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-2952v" [b05bfa4c-b71e-4ba3-82ec-ef3604433ba9] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005463801s
addons_test.go:415: (dbg) Run:  kubectl --context addons-091599 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-091599 top pods -n kube-system: exit status 1 (144.65458ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-b9xf7, age: 4m12.46412651s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-091599 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-091599 top pods -n kube-system: exit status 1 (89.894007ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-b9xf7, age: 4m16.982057844s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-091599 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-091599 top pods -n kube-system: exit status 1 (94.733457ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-b9xf7, age: 4m23.099013174s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-091599 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-091599 top pods -n kube-system: exit status 1 (112.429504ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-b9xf7, age: 4m27.117176095s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-091599 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-091599 top pods -n kube-system: exit status 1 (95.984981ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-b9xf7, age: 4m35.918425243s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-091599 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-091599 top pods -n kube-system: exit status 1 (85.451579ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-b9xf7, age: 4m55.305338357s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-091599 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-091599 top pods -n kube-system: exit status 1 (90.50329ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-b9xf7, age: 5m21.467964526s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-091599 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-091599 top pods -n kube-system: exit status 1 (95.812553ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-b9xf7, age: 5m49.007756367s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-091599 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-091599 top pods -n kube-system: exit status 1 (88.55624ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-b9xf7, age: 6m46.277196143s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-091599 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-091599 top pods -n kube-system: exit status 1 (100.483669ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-b9xf7, age: 8m9.19473104s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-091599 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-091599 top pods -n kube-system: exit status 1 (99.588648ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-b9xf7, age: 9m15.232603198s

** /stderr **
addons_test.go:429: failed checking metric server: exit status 1
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-091599 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-091599
helpers_test.go:235: (dbg) docker inspect addons-091599:

-- stdout --
	[
	    {
	        "Id": "44b1f0a47fff88b066a601e83e75dfc4e78d8aaff3a2192f2687722fb376faf8",
	        "Created": "2024-05-20T10:25:32.96313144Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1470184,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-20T10:25:33.277817654Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:56620e18f2c2c9a0448fc43c42f840334bd2baea497ff8deae66477dd0dbfecf",
	        "ResolvConfPath": "/var/lib/docker/containers/44b1f0a47fff88b066a601e83e75dfc4e78d8aaff3a2192f2687722fb376faf8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/44b1f0a47fff88b066a601e83e75dfc4e78d8aaff3a2192f2687722fb376faf8/hostname",
	        "HostsPath": "/var/lib/docker/containers/44b1f0a47fff88b066a601e83e75dfc4e78d8aaff3a2192f2687722fb376faf8/hosts",
	        "LogPath": "/var/lib/docker/containers/44b1f0a47fff88b066a601e83e75dfc4e78d8aaff3a2192f2687722fb376faf8/44b1f0a47fff88b066a601e83e75dfc4e78d8aaff3a2192f2687722fb376faf8-json.log",
	        "Name": "/addons-091599",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-091599:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-091599",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a50bffb6808659c6b2a8ed19423e5bfdb46fd7d7add6c832d59069960daad04b-init/diff:/var/lib/docker/overlay2/85c5c7809a5d893ae54ed3fa4fb6194b99d9d246c69ccb3f2daa2ee41dec0e23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a50bffb6808659c6b2a8ed19423e5bfdb46fd7d7add6c832d59069960daad04b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a50bffb6808659c6b2a8ed19423e5bfdb46fd7d7add6c832d59069960daad04b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a50bffb6808659c6b2a8ed19423e5bfdb46fd7d7add6c832d59069960daad04b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-091599",
	                "Source": "/var/lib/docker/volumes/addons-091599/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-091599",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-091599",
	                "name.minikube.sigs.k8s.io": "addons-091599",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d4bea263c992ec87ac15433b26f3304b0c191d98c61cbfc85046de9f7a426f9d",
	            "SandboxKey": "/var/run/docker/netns/d4bea263c992",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40497"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40496"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40493"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40495"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40494"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-091599": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "7347e77336db8d8ba56e5c364b3264e0485726a8e13495b5a03984bece7ecde7",
	                    "EndpointID": "32c212faf5763b907738236b5e43f219adfcce86d98737d396908605bddb542e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-091599",
	                        "44b1f0a47fff"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-091599 -n addons-091599
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-091599 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-091599 logs -n 25: (1.685843654s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-692242                                                                     | download-only-692242   | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	| delete  | -p download-only-801226                                                                     | download-only-801226   | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	| delete  | -p download-only-692242                                                                     | download-only-692242   | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	| start   | --download-only -p                                                                          | download-docker-161399 | jenkins | v1.33.1 | 20 May 24 10:25 UTC |                     |
	|         | download-docker-161399                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-161399                                                                   | download-docker-161399 | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-390288   | jenkins | v1.33.1 | 20 May 24 10:25 UTC |                     |
	|         | binary-mirror-390288                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33461                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-390288                                                                     | binary-mirror-390288   | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	| addons  | enable dashboard -p                                                                         | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:25 UTC |                     |
	|         | addons-091599                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:25 UTC |                     |
	|         | addons-091599                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-091599 --wait=true                                                                | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:29 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:29 UTC | 20 May 24 10:29 UTC |
	|         | -p addons-091599                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-091599 ip                                                                            | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:29 UTC | 20 May 24 10:29 UTC |
	| addons  | addons-091599 addons disable                                                                | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:29 UTC | 20 May 24 10:29 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:29 UTC | 20 May 24 10:29 UTC |
	|         | -p addons-091599                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-091599 ssh cat                                                                       | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:29 UTC | 20 May 24 10:29 UTC |
	|         | /opt/local-path-provisioner/pvc-2b457869-27d5-410a-999e-eb21b51d4e81_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-091599 addons disable                                                                | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:29 UTC | 20 May 24 10:30 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:29 UTC | 20 May 24 10:29 UTC |
	|         | addons-091599                                                                               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:30 UTC | 20 May 24 10:30 UTC |
	|         | addons-091599                                                                               |                        |         |         |                     |                     |
	| addons  | addons-091599 addons                                                                        | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:30 UTC | 20 May 24 10:30 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-091599 addons                                                                        | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:30 UTC | 20 May 24 10:30 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-091599 ssh curl -s                                                                   | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:30 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-091599 ip                                                                            | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:32 UTC | 20 May 24 10:32 UTC |
	| addons  | addons-091599 addons disable                                                                | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:33 UTC | 20 May 24 10:33 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-091599 addons disable                                                                | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:33 UTC | 20 May 24 10:33 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-091599 addons                                                                        | addons-091599          | jenkins | v1.33.1 | 20 May 24 10:35 UTC | 20 May 24 10:35 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 10:25:09
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
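	(Decoding the first entry below against this format: "I0520 10:25:09.080403 1469715 out.go:291]" reads as severity I = info, date 05-20, wall time 10:25:09.080403, thread id 1469715, emitted from out.go line 291.)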
	I0520 10:25:09.080403 1469715 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:25:09.080573 1469715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:25:09.080601 1469715 out.go:304] Setting ErrFile to fd 2...
	I0520 10:25:09.080620 1469715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:25:09.080903 1469715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-1463640/.minikube/bin
	I0520 10:25:09.081437 1469715 out.go:298] Setting JSON to false
	I0520 10:25:09.082395 1469715 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":151656,"bootTime":1716049053,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0520 10:25:09.082463 1469715 start.go:139] virtualization:  
	I0520 10:25:09.084930 1469715 out.go:177] * [addons-091599] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0520 10:25:09.087106 1469715 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 10:25:09.087298 1469715 notify.go:220] Checking for updates...
	I0520 10:25:09.088623 1469715 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 10:25:09.090592 1469715 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-1463640/kubeconfig
	I0520 10:25:09.092409 1469715 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-1463640/.minikube
	I0520 10:25:09.094194 1469715 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0520 10:25:09.095893 1469715 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 10:25:09.097633 1469715 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 10:25:09.118348 1469715 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0520 10:25:09.118508 1469715 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 10:25:09.180133 1469715 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-05-20 10:25:09.170853173 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 10:25:09.180238 1469715 docker.go:295] overlay module found
	I0520 10:25:09.182303 1469715 out.go:177] * Using the docker driver based on user configuration
	I0520 10:25:09.183888 1469715 start.go:297] selected driver: docker
	I0520 10:25:09.183909 1469715 start.go:901] validating driver "docker" against <nil>
	I0520 10:25:09.183924 1469715 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 10:25:09.184589 1469715 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 10:25:09.235696 1469715 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-05-20 10:25:09.226639359 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 10:25:09.235871 1469715 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 10:25:09.236130 1469715 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 10:25:09.237891 1469715 out.go:177] * Using Docker driver with root privileges
	I0520 10:25:09.239577 1469715 cni.go:84] Creating CNI manager for ""
	I0520 10:25:09.239609 1469715 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0520 10:25:09.239629 1469715 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 10:25:09.239709 1469715 start.go:340] cluster config:
	{Name:addons-091599 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-091599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:25:09.241843 1469715 out.go:177] * Starting "addons-091599" primary control-plane node in "addons-091599" cluster
	I0520 10:25:09.243460 1469715 cache.go:121] Beginning downloading kic base image for docker with crio
	I0520 10:25:09.245143 1469715 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0520 10:25:09.246707 1469715 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:25:09.246747 1469715 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0520 10:25:09.246765 1469715 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-1463640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4
	I0520 10:25:09.246774 1469715 cache.go:56] Caching tarball of preloaded images
	I0520 10:25:09.246858 1469715 preload.go:173] Found /home/jenkins/minikube-integration/18925-1463640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0520 10:25:09.246868 1469715 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 10:25:09.247250 1469715 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/config.json ...
	I0520 10:25:09.247283 1469715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/config.json: {Name:mk3ac92895713af11e0d1505d2a19f0e41cd4c3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:25:09.261028 1469715 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a to local cache
	I0520 10:25:09.261152 1469715 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local cache directory
	I0520 10:25:09.261172 1469715 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local cache directory, skipping pull
	I0520 10:25:09.261179 1469715 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in cache, skipping pull
	I0520 10:25:09.261186 1469715 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a as a tarball
	I0520 10:25:09.261191 1469715 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a from local cache
	I0520 10:25:26.039653 1469715 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a from cached tarball
	I0520 10:25:26.039694 1469715 cache.go:194] Successfully downloaded all kic artifacts
	I0520 10:25:26.039755 1469715 start.go:360] acquireMachinesLock for addons-091599: {Name:mk13a7ebbe82875043afa1a044664bb821768911 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 10:25:26.039900 1469715 start.go:364] duration metric: took 122.033µs to acquireMachinesLock for "addons-091599"
	I0520 10:25:26.039934 1469715 start.go:93] Provisioning new machine with config: &{Name:addons-091599 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-091599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 10:25:26.040027 1469715 start.go:125] createHost starting for "" (driver="docker")
	I0520 10:25:26.042409 1469715 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0520 10:25:26.042670 1469715 start.go:159] libmachine.API.Create for "addons-091599" (driver="docker")
	I0520 10:25:26.042706 1469715 client.go:168] LocalClient.Create starting
	I0520 10:25:26.042820 1469715 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca.pem
	I0520 10:25:26.180627 1469715 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/cert.pem
	I0520 10:25:26.645639 1469715 cli_runner.go:164] Run: docker network inspect addons-091599 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0520 10:25:26.660816 1469715 cli_runner.go:211] docker network inspect addons-091599 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0520 10:25:26.660930 1469715 network_create.go:281] running [docker network inspect addons-091599] to gather additional debugging logs...
	I0520 10:25:26.660955 1469715 cli_runner.go:164] Run: docker network inspect addons-091599
	W0520 10:25:26.676759 1469715 cli_runner.go:211] docker network inspect addons-091599 returned with exit code 1
	I0520 10:25:26.676797 1469715 network_create.go:284] error running [docker network inspect addons-091599]: docker network inspect addons-091599: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-091599 not found
	I0520 10:25:26.676831 1469715 network_create.go:286] output of [docker network inspect addons-091599]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-091599 not found
	
	** /stderr **
	I0520 10:25:26.676953 1469715 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0520 10:25:26.691979 1469715 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001753e90}
	I0520 10:25:26.692024 1469715 network_create.go:124] attempt to create docker network addons-091599 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0520 10:25:26.692124 1469715 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-091599 addons-091599
	I0520 10:25:26.759341 1469715 network_create.go:108] docker network addons-091599 192.168.49.0/24 created
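	(The network created here can be cross-checked by hand; a sketch, assuming the same profile name and Docker host as this run:
	  docker network inspect addons-091599 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	should print "192.168.49.0/24 192.168.49.1", matching the subnet and gateway logged above.)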
	I0520 10:25:26.759376 1469715 kic.go:121] calculated static IP "192.168.49.2" for the "addons-091599" container
	I0520 10:25:26.759450 1469715 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0520 10:25:26.773195 1469715 cli_runner.go:164] Run: docker volume create addons-091599 --label name.minikube.sigs.k8s.io=addons-091599 --label created_by.minikube.sigs.k8s.io=true
	I0520 10:25:26.789210 1469715 oci.go:103] Successfully created a docker volume addons-091599
	I0520 10:25:26.789319 1469715 cli_runner.go:164] Run: docker run --rm --name addons-091599-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-091599 --entrypoint /usr/bin/test -v addons-091599:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0520 10:25:28.758648 1469715 cli_runner.go:217] Completed: docker run --rm --name addons-091599-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-091599 --entrypoint /usr/bin/test -v addons-091599:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib: (1.969286568s)
	I0520 10:25:28.758677 1469715 oci.go:107] Successfully prepared a docker volume addons-091599
	I0520 10:25:28.758704 1469715 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:25:28.758723 1469715 kic.go:194] Starting extracting preloaded images to volume ...
	I0520 10:25:28.758812 1469715 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18925-1463640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-091599:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0520 10:25:32.888774 1469715 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18925-1463640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-091599:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.129921014s)
	I0520 10:25:32.888808 1469715 kic.go:203] duration metric: took 4.130080764s to extract preloaded images to volume ...
	W0520 10:25:32.888961 1469715 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0520 10:25:32.889077 1469715 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0520 10:25:32.946323 1469715 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-091599 --name addons-091599 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-091599 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-091599 --network addons-091599 --ip 192.168.49.2 --volume addons-091599:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0520 10:25:33.285520 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Running}}
	I0520 10:25:33.309772 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:25:33.332766 1469715 cli_runner.go:164] Run: docker exec addons-091599 stat /var/lib/dpkg/alternatives/iptables
	I0520 10:25:33.409349 1469715 oci.go:144] the created container "addons-091599" has a running status.
	I0520 10:25:33.409381 1469715 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa...
	I0520 10:25:33.600927 1469715 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0520 10:25:33.626113 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:25:33.645397 1469715 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0520 10:25:33.645417 1469715 kic_runner.go:114] Args: [docker exec --privileged addons-091599 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0520 10:25:33.717170 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:25:33.737840 1469715 machine.go:94] provisionDockerMachine start ...
	I0520 10:25:33.737926 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:25:33.762824 1469715 main.go:141] libmachine: Using SSH client type: native
	I0520 10:25:33.763192 1469715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 40497 <nil> <nil>}
	I0520 10:25:33.763211 1469715 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 10:25:33.763904 1469715 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0520 10:25:36.893223 1469715 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-091599
	
	I0520 10:25:36.893249 1469715 ubuntu.go:169] provisioning hostname "addons-091599"
	I0520 10:25:36.893324 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:25:36.911530 1469715 main.go:141] libmachine: Using SSH client type: native
	I0520 10:25:36.911784 1469715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 40497 <nil> <nil>}
	I0520 10:25:36.911803 1469715 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-091599 && echo "addons-091599" | sudo tee /etc/hostname
	I0520 10:25:37.050995 1469715 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-091599
	
	I0520 10:25:37.051098 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:25:37.068361 1469715 main.go:141] libmachine: Using SSH client type: native
	I0520 10:25:37.068603 1469715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 40497 <nil> <nil>}
	I0520 10:25:37.068619 1469715 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-091599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-091599/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-091599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 10:25:37.193660 1469715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
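	(Net effect of the hosts script above: whichever branch runs, /etc/hosts ends up containing the mapping
	  127.0.1.1 addons-091599
	for the newly set hostname.)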
	I0520 10:25:37.193687 1469715 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18925-1463640/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-1463640/.minikube}
	I0520 10:25:37.193711 1469715 ubuntu.go:177] setting up certificates
	I0520 10:25:37.193720 1469715 provision.go:84] configureAuth start
	I0520 10:25:37.193789 1469715 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-091599
	I0520 10:25:37.216602 1469715 provision.go:143] copyHostCerts
	I0520 10:25:37.216685 1469715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.pem (1082 bytes)
	I0520 10:25:37.216820 1469715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-1463640/.minikube/cert.pem (1123 bytes)
	I0520 10:25:37.216878 1469715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-1463640/.minikube/key.pem (1679 bytes)
	I0520 10:25:37.216922 1469715 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca-key.pem org=jenkins.addons-091599 san=[127.0.0.1 192.168.49.2 addons-091599 localhost minikube]
	I0520 10:25:37.836687 1469715 provision.go:177] copyRemoteCerts
	I0520 10:25:37.836819 1469715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 10:25:37.836863 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:25:37.852667 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:25:37.942307 1469715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 10:25:37.966185 1469715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 10:25:37.989888 1469715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 10:25:38.016225 1469715 provision.go:87] duration metric: took 822.492148ms to configureAuth
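	(To spot-check the server certificate provisioned above, outside the test harness one could inspect its SANs with openssl; a sketch, using the remote path the cert was copied to:
	  openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	which should list 127.0.0.1, 192.168.49.2, addons-091599, localhost and minikube, per the san=[...] values logged at generation time.)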
	I0520 10:25:38.016254 1469715 ubuntu.go:193] setting minikube options for container-runtime
	I0520 10:25:38.016480 1469715 config.go:182] Loaded profile config "addons-091599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:25:38.016596 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:25:38.034046 1469715 main.go:141] libmachine: Using SSH client type: native
	I0520 10:25:38.034298 1469715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 40497 <nil> <nil>}
	I0520 10:25:38.034319 1469715 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 10:25:38.265065 1469715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 10:25:38.265086 1469715 machine.go:97] duration metric: took 4.527227871s to provisionDockerMachine
	I0520 10:25:38.265102 1469715 client.go:171] duration metric: took 12.222379788s to LocalClient.Create
	I0520 10:25:38.265114 1469715 start.go:167] duration metric: took 12.222445706s to libmachine.API.Create "addons-091599"
	I0520 10:25:38.265121 1469715 start.go:293] postStartSetup for "addons-091599" (driver="docker")
	I0520 10:25:38.265132 1469715 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 10:25:38.265199 1469715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 10:25:38.265239 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:25:38.285298 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:25:38.378680 1469715 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 10:25:38.381724 1469715 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0520 10:25:38.381762 1469715 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0520 10:25:38.381774 1469715 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0520 10:25:38.381782 1469715 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0520 10:25:38.381797 1469715 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-1463640/.minikube/addons for local assets ...
	I0520 10:25:38.381864 1469715 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-1463640/.minikube/files for local assets ...
	I0520 10:25:38.381900 1469715 start.go:296] duration metric: took 116.773025ms for postStartSetup
	I0520 10:25:38.382203 1469715 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-091599
	I0520 10:25:38.397956 1469715 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/config.json ...
	I0520 10:25:38.398243 1469715 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:25:38.398294 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:25:38.413400 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:25:38.506518 1469715 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0520 10:25:38.510812 1469715 start.go:128] duration metric: took 12.470768346s to createHost
	I0520 10:25:38.510839 1469715 start.go:83] releasing machines lock for "addons-091599", held for 12.470923428s
	I0520 10:25:38.510912 1469715 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-091599
	I0520 10:25:38.527317 1469715 ssh_runner.go:195] Run: cat /version.json
	I0520 10:25:38.527388 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:25:38.527439 1469715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 10:25:38.527508 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:25:38.559200 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:25:38.560136 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:25:38.645352 1469715 ssh_runner.go:195] Run: systemctl --version
	I0520 10:25:38.759630 1469715 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 10:25:38.903020 1469715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 10:25:38.907324 1469715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 10:25:38.929290 1469715 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0520 10:25:38.929415 1469715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 10:25:38.960715 1469715 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0520 10:25:38.960737 1469715 start.go:494] detecting cgroup driver to use...
	I0520 10:25:38.960775 1469715 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0520 10:25:38.960826 1469715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 10:25:38.979364 1469715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 10:25:38.992856 1469715 docker.go:217] disabling cri-docker service (if available) ...
	I0520 10:25:38.992994 1469715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 10:25:39.009434 1469715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 10:25:39.026353 1469715 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 10:25:39.128752 1469715 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 10:25:39.226715 1469715 docker.go:233] disabling docker service ...
	I0520 10:25:39.226829 1469715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 10:25:39.249326 1469715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 10:25:39.262535 1469715 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 10:25:39.354753 1469715 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 10:25:39.455469 1469715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 10:25:39.466921 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 10:25:39.483780 1469715 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 10:25:39.483852 1469715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:25:39.494428 1469715 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 10:25:39.494520 1469715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:25:39.504557 1469715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:25:39.514380 1469715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:25:39.524517 1469715 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 10:25:39.533760 1469715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:25:39.544101 1469715 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:25:39.560573 1469715 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
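	(Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly these keys; a sketch of the expected result, not a capture from the node:
	  pause_image = "registry.k8s.io/pause:3.9"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]
	)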
	I0520 10:25:39.571064 1469715 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 10:25:39.580236 1469715 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 10:25:39.589253 1469715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:25:39.674366 1469715 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 10:25:39.803394 1469715 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 10:25:39.803505 1469715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 10:25:39.807728 1469715 start.go:562] Will wait 60s for crictl version
	I0520 10:25:39.807820 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:25:39.811172 1469715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 10:25:39.850093 1469715 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0520 10:25:39.850225 1469715 ssh_runner.go:195] Run: crio --version
	I0520 10:25:39.892660 1469715 ssh_runner.go:195] Run: crio --version
	I0520 10:25:39.934189 1469715 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.24.6 ...
	I0520 10:25:39.936064 1469715 cli_runner.go:164] Run: docker network inspect addons-091599 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0520 10:25:39.950266 1469715 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0520 10:25:39.953884 1469715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 10:25:39.964412 1469715 kubeadm.go:877] updating cluster {Name:addons-091599 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-091599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 10:25:39.964542 1469715 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:25:39.964609 1469715 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 10:25:40.058719 1469715 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 10:25:40.058743 1469715 crio.go:433] Images already preloaded, skipping extraction
	I0520 10:25:40.058810 1469715 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 10:25:40.099576 1469715 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 10:25:40.099603 1469715 cache_images.go:84] Images are preloaded, skipping loading
	I0520 10:25:40.099613 1469715 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 crio true true} ...
	I0520 10:25:40.099726 1469715 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-091599 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-091599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
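	(The rendered unit and drop-in can be read back after the fact in the same style as the other commands in this report, e.g. out/minikube-linux-arm64 -p addons-091599 ssh "systemctl cat kubelet", which would print kubelet.service together with the 10-kubeadm.conf drop-in that is scp'd a few lines below.)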
	I0520 10:25:40.099834 1469715 ssh_runner.go:195] Run: crio config
	I0520 10:25:40.150330 1469715 cni.go:84] Creating CNI manager for ""
	I0520 10:25:40.150354 1469715 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0520 10:25:40.150363 1469715 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 10:25:40.150408 1469715 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-091599 NodeName:addons-091599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 10:25:40.150588 1469715 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-091599"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
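	(A config like the one above can be sanity-checked without modifying the host; a sketch, subject to preflight checks and using the path the file is copied to just below:
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
	which parses the InitConfiguration/ClusterConfiguration/KubeletConfiguration documents and reports what it would do without applying changes.)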
	
	I0520 10:25:40.150669 1469715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 10:25:40.160762 1469715 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 10:25:40.160880 1469715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 10:25:40.170118 1469715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0520 10:25:40.188254 1469715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 10:25:40.206672 1469715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0520 10:25:40.224342 1469715 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0520 10:25:40.227700 1469715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 10:25:40.238259 1469715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:25:40.319043 1469715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 10:25:40.332592 1469715 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599 for IP: 192.168.49.2
	I0520 10:25:40.332670 1469715 certs.go:194] generating shared ca certs ...
	I0520 10:25:40.332703 1469715 certs.go:226] acquiring lock for ca certs: {Name:mke113fbac30e255083f63bab9dafb629ead7667 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:25:40.332874 1469715 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.key
	I0520 10:25:40.587546 1469715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.crt ...
	I0520 10:25:40.587581 1469715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.crt: {Name:mka4f6d7c1010d187841c8e9323a4a2f71d05d5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:25:40.587813 1469715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.key ...
	I0520 10:25:40.587828 1469715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.key: {Name:mk3e2eb9d9ca29aa42fb5e69046b5e5858b088cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:25:40.587927 1469715 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/proxy-client-ca.key
	I0520 10:25:40.949528 1469715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-1463640/.minikube/proxy-client-ca.crt ...
	I0520 10:25:40.949562 1469715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/proxy-client-ca.crt: {Name:mk9ab1563bc061863c83b70b953645aba3460f3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:25:40.950818 1469715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-1463640/.minikube/proxy-client-ca.key ...
	I0520 10:25:40.950839 1469715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/proxy-client-ca.key: {Name:mkcbd84db3d5170ba1258a231d433cab816854ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:25:40.950972 1469715 certs.go:256] generating profile certs ...
	I0520 10:25:40.951042 1469715 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.key
	I0520 10:25:40.951065 1469715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt with IP's: []
	I0520 10:25:41.829835 1469715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt ...
	I0520 10:25:41.829881 1469715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: {Name:mkb5fcf325622e3c9a0048438f88c8b12065563b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:25:41.830088 1469715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.key ...
	I0520 10:25:41.830103 1469715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.key: {Name:mkd2dc77edbe3f95362d0d11399740c9ccfbe043 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:25:41.830197 1469715 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/apiserver.key.e700fd87
	I0520 10:25:41.830225 1469715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/apiserver.crt.e700fd87 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0520 10:25:42.187828 1469715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/apiserver.crt.e700fd87 ...
	I0520 10:25:42.187871 1469715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/apiserver.crt.e700fd87: {Name:mk210e5cd684329a0af9a80844914a0601cf4e99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:25:42.188590 1469715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/apiserver.key.e700fd87 ...
	I0520 10:25:42.188618 1469715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/apiserver.key.e700fd87: {Name:mk483ff63e42360c514f19d5304d4a7595702090 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:25:42.188770 1469715 certs.go:381] copying /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/apiserver.crt.e700fd87 -> /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/apiserver.crt
	I0520 10:25:42.188872 1469715 certs.go:385] copying /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/apiserver.key.e700fd87 -> /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/apiserver.key
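The apiserver profile cert generated above is signed for 10.96.0.1, 127.0.0.1, 10.0.0.1, and 192.168.49.2. A minimal check of the SANs actually baked into the cert on the node (sketch, paths from this run):

	minikube -p addons-091599 ssh -- "sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'"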
	I0520 10:25:42.188953 1469715 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/proxy-client.key
	I0520 10:25:42.188990 1469715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/proxy-client.crt with IP's: []
	I0520 10:25:42.720110 1469715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/proxy-client.crt ...
	I0520 10:25:42.720145 1469715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/proxy-client.crt: {Name:mk755bdcbc9aac122bf017d6e27211d5de37f0f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:25:42.720806 1469715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/proxy-client.key ...
	I0520 10:25:42.720825 1469715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/proxy-client.key: {Name:mk936e69555d89195e2c964aa757467423411687 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:25:42.721030 1469715 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 10:25:42.721077 1469715 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca.pem (1082 bytes)
	I0520 10:25:42.721107 1469715 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/cert.pem (1123 bytes)
	I0520 10:25:42.721140 1469715 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/key.pem (1679 bytes)
	I0520 10:25:42.721784 1469715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 10:25:42.747166 1469715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 10:25:42.771351 1469715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 10:25:42.797107 1469715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 10:25:42.820806 1469715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0520 10:25:42.844977 1469715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 10:25:42.868336 1469715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 10:25:42.891509 1469715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 10:25:42.916132 1469715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 10:25:42.941488 1469715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 10:25:42.959322 1469715 ssh_runner.go:195] Run: openssl version
	I0520 10:25:42.964660 1469715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 10:25:42.974046 1469715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:25:42.977508 1469715 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:25:42.977578 1469715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:25:42.984471 1469715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
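The two openssl steps above implement the standard CA-store convention: `-hash` prints the certificate's subject hash (b5213941 here), and a symlink named <hash>.0 under /etc/ssl/certs is what OpenSSL-based tools resolve when validating against the minikube CA:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem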
	I0520 10:25:42.993801 1469715 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 10:25:42.997045 1469715 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 10:25:42.997100 1469715 kubeadm.go:391] StartCluster: {Name:addons-091599 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-091599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:25:42.997190 1469715 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 10:25:42.997258 1469715 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 10:25:43.038409 1469715 cri.go:89] found id: ""
	I0520 10:25:43.038530 1469715 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 10:25:43.047529 1469715 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 10:25:43.056759 1469715 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0520 10:25:43.056829 1469715 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 10:25:43.065876 1469715 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 10:25:43.065896 1469715 kubeadm.go:156] found existing configuration files:
	
	I0520 10:25:43.065951 1469715 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 10:25:43.074759 1469715 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 10:25:43.074829 1469715 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 10:25:43.083363 1469715 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 10:25:43.092022 1469715 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 10:25:43.092120 1469715 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 10:25:43.100272 1469715 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 10:25:43.108570 1469715 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 10:25:43.108651 1469715 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 10:25:43.116988 1469715 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 10:25:43.125362 1469715 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 10:25:43.125455 1469715 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
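Each grep-then-rm pair above applies the same rule: keep an existing kubeconfig only if it already points at the expected control-plane endpoint, otherwise delete it so kubeadm regenerates it. A minimal sketch of that cleanup (endpoint taken from this run):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done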
	I0520 10:25:43.133954 1469715 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0520 10:25:43.180900 1469715 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 10:25:43.181128 1469715 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 10:25:43.219107 1469715 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0520 10:25:43.219223 1469715 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1058-aws
	I0520 10:25:43.219276 1469715 kubeadm.go:309] OS: Linux
	I0520 10:25:43.219345 1469715 kubeadm.go:309] CGROUPS_CPU: enabled
	I0520 10:25:43.219418 1469715 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0520 10:25:43.219490 1469715 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0520 10:25:43.219553 1469715 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0520 10:25:43.219623 1469715 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0520 10:25:43.219694 1469715 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0520 10:25:43.219763 1469715 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0520 10:25:43.219830 1469715 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0520 10:25:43.219899 1469715 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0520 10:25:43.297971 1469715 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 10:25:43.298125 1469715 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 10:25:43.298239 1469715 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0520 10:25:43.564279 1469715 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 10:25:43.569008 1469715 out.go:204]   - Generating certificates and keys ...
	I0520 10:25:43.569210 1469715 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 10:25:43.569301 1469715 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 10:25:43.926684 1469715 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 10:25:44.317123 1469715 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 10:25:44.608533 1469715 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 10:25:44.786528 1469715 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 10:25:45.235651 1469715 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 10:25:45.235819 1469715 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-091599 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0520 10:25:45.983063 1469715 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 10:25:45.983206 1469715 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-091599 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0520 10:25:46.681532 1469715 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 10:25:47.064603 1469715 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 10:25:47.262685 1469715 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 10:25:47.262957 1469715 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 10:25:47.633283 1469715 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 10:25:47.987177 1469715 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 10:25:48.585760 1469715 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 10:25:48.728731 1469715 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 10:25:50.176830 1469715 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 10:25:50.177716 1469715 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 10:25:50.182372 1469715 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 10:25:50.184464 1469715 out.go:204]   - Booting up control plane ...
	I0520 10:25:50.184567 1469715 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 10:25:50.184644 1469715 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 10:25:50.185368 1469715 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 10:25:50.195670 1469715 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 10:25:50.196937 1469715 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 10:25:50.197007 1469715 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 10:25:50.286247 1469715 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 10:25:50.286339 1469715 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 10:25:51.787533 1469715 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.501460471s
	I0520 10:25:51.787619 1469715 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 10:25:57.289145 1469715 kubeadm.go:309] [api-check] The API server is healthy after 5.501823227s
	I0520 10:25:57.311864 1469715 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 10:25:57.326168 1469715 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 10:25:57.349968 1469715 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 10:25:57.350158 1469715 kubeadm.go:309] [mark-control-plane] Marking the node addons-091599 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 10:25:57.361988 1469715 kubeadm.go:309] [bootstrap-token] Using token: zcfe6y.yzbrm53m11ivv0h7
	I0520 10:25:57.363945 1469715 out.go:204]   - Configuring RBAC rules ...
	I0520 10:25:57.364078 1469715 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 10:25:57.370969 1469715 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 10:25:57.384207 1469715 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 10:25:57.387563 1469715 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 10:25:57.391438 1469715 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 10:25:57.397185 1469715 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 10:25:57.696324 1469715 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 10:25:58.128329 1469715 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 10:25:58.695809 1469715 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 10:25:58.696800 1469715 kubeadm.go:309] 
	I0520 10:25:58.696876 1469715 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 10:25:58.696890 1469715 kubeadm.go:309] 
	I0520 10:25:58.696969 1469715 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 10:25:58.696981 1469715 kubeadm.go:309] 
	I0520 10:25:58.697007 1469715 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 10:25:58.697070 1469715 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 10:25:58.697124 1469715 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 10:25:58.697132 1469715 kubeadm.go:309] 
	I0520 10:25:58.697184 1469715 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 10:25:58.697192 1469715 kubeadm.go:309] 
	I0520 10:25:58.697238 1469715 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 10:25:58.697246 1469715 kubeadm.go:309] 
	I0520 10:25:58.697296 1469715 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 10:25:58.697381 1469715 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 10:25:58.697451 1469715 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 10:25:58.697459 1469715 kubeadm.go:309] 
	I0520 10:25:58.697545 1469715 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 10:25:58.697622 1469715 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 10:25:58.697630 1469715 kubeadm.go:309] 
	I0520 10:25:58.697726 1469715 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token zcfe6y.yzbrm53m11ivv0h7 \
	I0520 10:25:58.697829 1469715 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e4ec3248f7179a7e7b3262b27f9565d878f3b66abe6f06904dcca5f386d0f173 \
	I0520 10:25:58.697854 1469715 kubeadm.go:309] 	--control-plane 
	I0520 10:25:58.697862 1469715 kubeadm.go:309] 
	I0520 10:25:58.697944 1469715 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 10:25:58.697952 1469715 kubeadm.go:309] 
	I0520 10:25:58.698030 1469715 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token zcfe6y.yzbrm53m11ivv0h7 \
	I0520 10:25:58.698132 1469715 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e4ec3248f7179a7e7b3262b27f9565d878f3b66abe6f06904dcca5f386d0f173 
	I0520 10:25:58.701638 1469715 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1058-aws\n", err: exit status 1
	I0520 10:25:58.701783 1469715 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
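The join commands above embed a bootstrap token with a 24h TTL (per the InitConfiguration earlier in this log). For a single-node test profile they are informational only, but once the token expires a fresh join command can be minted on the control plane:

	# run on the control-plane node; prints a ready-to-use `kubeadm join ...`
	sudo kubeadm token create --print-join-command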
	I0520 10:25:58.701820 1469715 cni.go:84] Creating CNI manager for ""
	I0520 10:25:58.701834 1469715 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0520 10:25:58.704152 1469715 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0520 10:25:58.705748 1469715 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0520 10:25:58.711243 1469715 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0520 10:25:58.711269 1469715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0520 10:25:58.734331 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
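For the docker driver with the crio runtime, minikube selects kindnet and applies its manifest with the bundled kubectl. Assuming the manifest's usual DaemonSet name (kindnet, not shown in this log), the rollout can be watched with:

	kubectl --context addons-091599 -n kube-system rollout status daemonset/kindnet --timeout=60s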
	I0520 10:25:58.990818 1469715 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 10:25:58.990987 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-091599 minikube.k8s.io/updated_at=2024_05_20T10_25_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=addons-091599 minikube.k8s.io/primary=true
	I0520 10:25:58.991011 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:25:59.137983 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:25:59.138059 1469715 ops.go:34] apiserver oom_adj: -16
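The -16 read back from /proc is the OOM score adjustment the kubelet applies to the API server pod, making the kernel strongly prefer killing other processes first. On current kernels the canonical knob is oom_score_adj; this log reads the legacy oom_adj view of the same setting:

	cat /proc/$(pgrep kube-apiserver)/oom_adj        # legacy scale; -16 in this run
	cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # modern scale for the same value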
	I0520 10:25:59.638875 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:00.139060 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:00.638694 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:01.138150 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:01.638545 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:02.138119 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:02.638803 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:03.138106 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:03.638255 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:04.138295 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:04.638113 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:05.138909 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:05.638804 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:06.138873 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:06.638372 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:07.138944 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:07.638487 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:08.138869 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:08.638644 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:09.138285 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:09.638284 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:10.138132 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:10.638517 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:11.138923 1469715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:26:11.229865 1469715 kubeadm.go:1107] duration metric: took 12.238969721s to wait for elevateKubeSystemPrivileges
	W0520 10:26:11.229904 1469715 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 10:26:11.229912 1469715 kubeadm.go:393] duration metric: took 28.232819608s to StartCluster
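The burst of identical `kubectl get sa default` calls above is minikube polling roughly every 500ms until the default ServiceAccount exists, its readiness signal before elevating kube-system privileges (12.2s in this run). A minimal equivalent of that wait loop, using the paths from this log:

	until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done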
	I0520 10:26:11.229929 1469715 settings.go:142] acquiring lock: {Name:mkcb442de9baf8dd2fb339ccf162868e80429e33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:26:11.230508 1469715 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-1463640/kubeconfig
	I0520 10:26:11.230901 1469715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/kubeconfig: {Name:mk86e76ecc665bde4f67c226ceb67716f06a54d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:26:11.231127 1469715 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 10:26:11.233779 1469715 out.go:177] * Verifying Kubernetes components...
	I0520 10:26:11.231228 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 10:26:11.231390 1469715 config.go:182] Loaded profile config "addons-091599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:26:11.231399 1469715 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
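The toEnable map reflects what this test profile requested: ingress, ingress-dns, metrics-server, registry, csi-hostpath-driver, volumesnapshots, yakd, and a few others are true; everything else stays off. The same switches can be toggled per-profile from the CLI, e.g.:

	minikube -p addons-091599 addons list
	minikube -p addons-091599 addons enable metrics-server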
	I0520 10:26:11.235785 1469715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:26:11.235790 1469715 addons.go:69] Setting yakd=true in profile "addons-091599"
	I0520 10:26:11.235820 1469715 addons.go:234] Setting addon yakd=true in "addons-091599"
	I0520 10:26:11.235859 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.235876 1469715 addons.go:69] Setting ingress-dns=true in profile "addons-091599"
	I0520 10:26:11.235897 1469715 addons.go:234] Setting addon ingress-dns=true in "addons-091599"
	I0520 10:26:11.235925 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.236365 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.236413 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.237027 1469715 addons.go:69] Setting cloud-spanner=true in profile "addons-091599"
	I0520 10:26:11.237077 1469715 addons.go:234] Setting addon cloud-spanner=true in "addons-091599"
	I0520 10:26:11.237105 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.237587 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.237992 1469715 addons.go:69] Setting inspektor-gadget=true in profile "addons-091599"
	I0520 10:26:11.238024 1469715 addons.go:234] Setting addon inspektor-gadget=true in "addons-091599"
	I0520 10:26:11.238052 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.238440 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.240137 1469715 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-091599"
	I0520 10:26:11.240227 1469715 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-091599"
	I0520 10:26:11.240261 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.240757 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.242743 1469715 addons.go:69] Setting metrics-server=true in profile "addons-091599"
	I0520 10:26:11.242794 1469715 addons.go:234] Setting addon metrics-server=true in "addons-091599"
	I0520 10:26:11.242831 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.243303 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.247496 1469715 addons.go:69] Setting default-storageclass=true in profile "addons-091599"
	I0520 10:26:11.247570 1469715 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-091599"
	I0520 10:26:11.247938 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.250377 1469715 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-091599"
	I0520 10:26:11.250439 1469715 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-091599"
	I0520 10:26:11.250481 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.252737 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.259581 1469715 addons.go:69] Setting gcp-auth=true in profile "addons-091599"
	I0520 10:26:11.259649 1469715 mustload.go:65] Loading cluster: addons-091599
	I0520 10:26:11.259875 1469715 config.go:182] Loaded profile config "addons-091599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:26:11.260175 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.266089 1469715 addons.go:69] Setting registry=true in profile "addons-091599"
	I0520 10:26:11.266142 1469715 addons.go:234] Setting addon registry=true in "addons-091599"
	I0520 10:26:11.266187 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.266767 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.275218 1469715 addons.go:69] Setting ingress=true in profile "addons-091599"
	I0520 10:26:11.275270 1469715 addons.go:234] Setting addon ingress=true in "addons-091599"
	I0520 10:26:11.275328 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.275899 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.276082 1469715 addons.go:69] Setting storage-provisioner=true in profile "addons-091599"
	I0520 10:26:11.276107 1469715 addons.go:234] Setting addon storage-provisioner=true in "addons-091599"
	I0520 10:26:11.276139 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.276567 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.305036 1469715 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-091599"
	I0520 10:26:11.305165 1469715 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-091599"
	I0520 10:26:11.305742 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.328375 1469715 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0520 10:26:11.327283 1469715 addons.go:69] Setting volumesnapshots=true in profile "addons-091599"
	I0520 10:26:11.338133 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.353638 1469715 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0520 10:26:11.353702 1469715 addons.go:234] Setting addon volumesnapshots=true in "addons-091599"
	I0520 10:26:11.359982 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.360662 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.361226 1469715 out.go:177]   - Using image docker.io/registry:2.8.3
	I0520 10:26:11.386671 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0520 10:26:11.373798 1469715 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0520 10:26:11.401066 1469715 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0520 10:26:11.455646 1469715 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0520 10:26:11.405928 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.458996 1469715 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0520 10:26:11.464466 1469715 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0520 10:26:11.466767 1469715 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0520 10:26:11.469424 1469715 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0520 10:26:11.472908 1469715 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0520 10:26:11.467010 1469715 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0520 10:26:11.467016 1469715 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0520 10:26:11.467954 1469715 addons.go:234] Setting addon default-storageclass=true in "addons-091599"
	I0520 10:26:11.475290 1469715 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-091599"
	I0520 10:26:11.480890 1469715 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0520 10:26:11.481071 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.483082 1469715 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0520 10:26:11.483090 1469715 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.0
	I0520 10:26:11.483096 1469715 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0520 10:26:11.483184 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:11.483197 1469715 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0520 10:26:11.485159 1469715 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 10:26:11.485270 1469715 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0520 10:26:11.489248 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.497923 1469715 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0520 10:26:11.492766 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:11.492797 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.492808 1469715 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 10:26:11.492814 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0520 10:26:11.499988 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.522179 1469715 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 10:26:11.511051 1469715 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 10:26:11.511174 1469715 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0520 10:26:11.511182 1469715 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0520 10:26:11.511186 1469715 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0520 10:26:11.515471 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.530461 1469715 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0520 10:26:11.532274 1469715 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0520 10:26:11.532299 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0520 10:26:11.532378 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.555861 1469715 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 10:26:11.529080 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0520 10:26:11.529089 1469715 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0520 10:26:11.529094 1469715 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0520 10:26:11.558182 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.563754 1469715 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 10:26:11.563775 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 10:26:11.563841 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.581220 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.590146 1469715 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0520 10:26:11.590169 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0520 10:26:11.590237 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.611056 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.625791 1469715 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0520 10:26:11.632538 1469715 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0520 10:26:11.632578 1469715 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0520 10:26:11.632676 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.644066 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:11.719137 1469715 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 10:26:11.719160 1469715 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 10:26:11.719246 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.741418 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:11.756201 1469715 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0520 10:26:11.759168 1469715 out.go:177]   - Using image docker.io/busybox:stable
	I0520 10:26:11.761517 1469715 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0520 10:26:11.761539 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0520 10:26:11.761612 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:11.763570 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:11.765173 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:11.768164 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:11.778293 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:11.796773 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:11.798544 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 10:26:11.798688 1469715 ssh_runner.go:195] Run: sudo systemctl start kubelet
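The sed pipeline above injects a hosts block into the CoreDNS Corefile ahead of the forward plugin, so host.minikube.internal resolves to the host gateway (192.168.49.1). The resulting Corefile fragment should look roughly like:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf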
	I0520 10:26:11.814090 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:11.820462 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:11.841928 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:11.853823 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:11.862780 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:11.875702 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:12.102263 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0520 10:26:12.189319 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0520 10:26:12.235970 1469715 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 10:26:12.236006 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0520 10:26:12.244210 1469715 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0520 10:26:12.244232 1469715 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0520 10:26:12.249611 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0520 10:26:12.299325 1469715 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0520 10:26:12.299411 1469715 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0520 10:26:12.325242 1469715 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0520 10:26:12.325347 1469715 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0520 10:26:12.338195 1469715 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0520 10:26:12.338299 1469715 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0520 10:26:12.338709 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 10:26:12.350466 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0520 10:26:12.362260 1469715 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0520 10:26:12.362344 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0520 10:26:12.450976 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0520 10:26:12.464319 1469715 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 10:26:12.464348 1469715 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 10:26:12.476032 1469715 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0520 10:26:12.476057 1469715 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0520 10:26:12.484462 1469715 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0520 10:26:12.484500 1469715 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0520 10:26:12.505221 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0520 10:26:12.556846 1469715 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0520 10:26:12.556873 1469715 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0520 10:26:12.565968 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 10:26:12.585576 1469715 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0520 10:26:12.585601 1469715 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0520 10:26:12.644557 1469715 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0520 10:26:12.644582 1469715 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0520 10:26:12.677454 1469715 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 10:26:12.677479 1469715 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 10:26:12.701831 1469715 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0520 10:26:12.701856 1469715 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0520 10:26:12.754277 1469715 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0520 10:26:12.754302 1469715 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0520 10:26:12.769867 1469715 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0520 10:26:12.769895 1469715 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0520 10:26:12.813419 1469715 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0520 10:26:12.813444 1469715 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0520 10:26:12.871103 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 10:26:12.915806 1469715 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0520 10:26:12.915829 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0520 10:26:12.951391 1469715 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0520 10:26:12.951415 1469715 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0520 10:26:12.961125 1469715 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0520 10:26:12.961150 1469715 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0520 10:26:12.991189 1469715 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 10:26:12.991214 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0520 10:26:13.077429 1469715 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0520 10:26:13.077455 1469715 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0520 10:26:13.087100 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0520 10:26:13.099936 1469715 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0520 10:26:13.099962 1469715 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0520 10:26:13.132330 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 10:26:13.148815 1469715 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0520 10:26:13.148844 1469715 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0520 10:26:13.171125 1469715 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0520 10:26:13.171152 1469715 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0520 10:26:13.260548 1469715 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0520 10:26:13.260572 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0520 10:26:13.263041 1469715 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0520 10:26:13.263066 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0520 10:26:13.356130 1469715 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0520 10:26:13.356164 1469715 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0520 10:26:13.396778 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0520 10:26:13.439080 1469715 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0520 10:26:13.439106 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0520 10:26:13.634073 1469715 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0520 10:26:13.634142 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0520 10:26:13.804493 1469715 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0520 10:26:13.804584 1469715 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0520 10:26:13.907402 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0520 10:26:14.992240 1469715 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.193658049s)
	I0520 10:26:14.992267 1469715 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
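The bash pipeline above is how minikube makes the host reachable from inside the cluster: it rewrites the CoreDNS Corefile in place with sed, inserting a hosts block that maps host.minikube.internal to the Docker network gateway (192.168.49.1) and a log directive before errors. A quick verification sketch (standard kubectl, not taken from this log):

	# Dump the patched Corefile; the sed expressions above should have inserted:
	#     hosts {
	#        192.168.49.1 host.minikube.internal
	#        fallthrough
	#     }
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'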
	I0520 10:26:14.993386 1469715 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.194679507s)
	I0520 10:26:14.994567 1469715 node_ready.go:35] waiting up to 6m0s for node "addons-091599" to be "Ready" ...
	I0520 10:26:15.811252 1469715 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-091599" context rescaled to 1 replicas
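Minikube also scales the stock two-replica CoreDNS deployment down to a single replica for this one-node cluster, as logged above. The equivalent manual command, for illustration only (kapi.go does this programmatically through the Kubernetes API):

	kubectl -n kube-system scale deployment coredns --replicas=1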
	I0520 10:26:16.316440 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.214087025s)
	I0520 10:26:16.826679 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.637276999s)
	I0520 10:26:16.826863 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.577137353s)
	I0520 10:26:16.826920 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.488167038s)
	I0520 10:26:16.826974 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.47644844s)
	I0520 10:26:17.029739 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:17.844451 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.393435631s)
	I0520 10:26:17.844496 1469715 addons.go:470] Verifying addon ingress=true in "addons-091599"
	I0520 10:26:17.844650 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.339400652s)
	I0520 10:26:17.844668 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.278677511s)
	I0520 10:26:17.844809 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.973678705s)
	I0520 10:26:17.844839 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.757713441s)
	I0520 10:26:17.844937 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.712578193s)
	I0520 10:26:17.845021 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.448206871s)
	I0520 10:26:17.847278 1469715 out.go:177] * Verifying ingress addon...
	I0520 10:26:17.851142 1469715 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0520 10:26:17.851370 1469715 addons.go:470] Verifying addon registry=true in "addons-091599"
	I0520 10:26:17.851706 1469715 addons.go:470] Verifying addon metrics-server=true in "addons-091599"
	W0520 10:26:17.851731 1469715 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0520 10:26:17.854522 1469715 out.go:177] * Verifying registry addon...
	I0520 10:26:17.857192 1469715 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-091599 service yakd-dashboard -n yakd-dashboard
	
	I0520 10:26:17.857244 1469715 retry.go:31] will retry after 161.96594ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
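The failure above is a CRD ordering race rather than a broken manifest: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but it is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the API server has not yet registered the new kind when the REST mapping is resolved, hence "ensure CRDs are installed first". Minikube's answer is simply to retry: the --force re-apply below starts about 160 ms later and completes successfully at 10:26:21. The conventional fix pattern looks like this sketch, reusing the same manifest paths (commands assumed, not minikube's actual retry logic):

	# Apply the CRDs on their own, wait for them to be Established, then apply
	# the custom resources that instantiate them.
	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml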
	I0520 10:26:17.858186 1469715 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0520 10:26:17.878996 1469715 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0520 10:26:17.879024 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:17.891388 1469715 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0520 10:26:17.891414 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:18.032071 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 10:26:18.386920 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:18.387650 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:18.586958 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.679438936s)
	I0520 10:26:18.586994 1469715 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-091599"
	I0520 10:26:18.590148 1469715 out.go:177] * Verifying csi-hostpath-driver addon...
	I0520 10:26:18.595176 1469715 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0520 10:26:18.674366 1469715 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0520 10:26:18.674390 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
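Each "Verifying ... addon" phase is a label-selector poll: kapi.go lists the pods matching the selector and re-checks until every pod leaves Pending and reports Ready, which is what produces the long runs of "waiting for pod" lines that follow. A rough kubectl equivalent of one such wait (the timeout here is illustrative, not taken from the log):

	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m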
	I0520 10:26:18.855261 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:18.874851 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:19.103647 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:19.355453 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:19.376514 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:19.498843 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:19.616306 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:19.855443 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:19.873860 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:20.100930 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:20.355580 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:20.374334 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:20.607615 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:20.856298 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:20.873366 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:21.126424 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:21.157832 1469715 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0520 10:26:21.157977 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:21.209990 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:21.356472 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:21.406989 1469715 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0520 10:26:21.416612 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:21.451857 1469715 addons.go:234] Setting addon gcp-auth=true in "addons-091599"
	I0520 10:26:21.451911 1469715 host.go:66] Checking if "addons-091599" exists ...
	I0520 10:26:21.452473 1469715 cli_runner.go:164] Run: docker container inspect addons-091599 --format={{.State.Status}}
	I0520 10:26:21.479464 1469715 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0520 10:26:21.479533 1469715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-091599
	I0520 10:26:21.501417 1469715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40497 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/addons-091599/id_rsa Username:docker}
	I0520 10:26:21.506512 1469715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.474391891s)
	I0520 10:26:21.517635 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:21.610806 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:21.627914 1469715 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 10:26:21.630590 1469715 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0520 10:26:21.633181 1469715 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0520 10:26:21.633250 1469715 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0520 10:26:21.660418 1469715 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0520 10:26:21.660492 1469715 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0520 10:26:21.683116 1469715 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0520 10:26:21.683187 1469715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0520 10:26:21.704479 1469715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
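The sequence above sets up the gcp-auth addon: minikube copies the application credentials and project name onto the node, then deploys the webhook (the gcp-auth-webhook image above) which, per the addon's design, injects those credentials into workload pods. Outside this test run the same addon can be switched on by hand, for example:

	# Illustrative; the test harness enables the addon itself.
	minikube -p addons-091599 addons enable gcp-auth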
	I0520 10:26:21.857447 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:21.873479 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:22.099646 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:22.328248 1469715 addons.go:470] Verifying addon gcp-auth=true in "addons-091599"
	I0520 10:26:22.331239 1469715 out.go:177] * Verifying gcp-auth addon...
	I0520 10:26:22.334738 1469715 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0520 10:26:22.338442 1469715 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0520 10:26:22.338466 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:22.355817 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:22.373806 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:22.602050 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:22.838697 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:22.855898 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:22.872880 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:23.100107 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:23.341221 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:23.357948 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:23.376101 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:23.604150 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:23.845959 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:23.856317 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:23.873688 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:23.998676 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:24.100123 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:24.338300 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:24.355945 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:24.372785 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:24.600731 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:24.839539 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:24.855622 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:24.873673 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:25.100181 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:25.338075 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:25.355896 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:25.373845 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:25.601226 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:25.838297 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:25.855211 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:25.873816 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:26.099547 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:26.338637 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:26.355648 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:26.373927 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:26.497763 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:26.603576 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:26.838634 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:26.855380 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:26.873263 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:27.099940 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:27.339238 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:27.356238 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:27.373083 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:27.599960 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:27.839001 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:27.855786 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:27.873754 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:28.099584 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:28.338073 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:28.356127 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:28.373013 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:28.498729 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:28.600654 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:28.839370 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:28.855560 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:28.873406 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:29.100212 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:29.340040 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:29.356423 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:29.373254 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:29.604705 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:29.838019 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:29.856060 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:29.872840 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:30.099896 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:30.338837 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:30.355742 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:30.373695 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:30.498794 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:30.600075 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:30.839246 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:30.855502 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:30.877105 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:31.100641 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:31.338644 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:31.355314 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:31.373302 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:31.613764 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:31.839344 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:31.855662 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:31.873394 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:32.100055 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:32.338727 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:32.355539 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:32.373488 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:32.599576 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:32.839658 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:32.855598 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:32.873421 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:32.998656 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:33.100111 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:33.338250 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:33.355666 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:33.373338 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:33.600362 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:33.838396 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:33.855941 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:33.873428 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:34.099895 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:34.338227 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:34.356048 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:34.373575 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:34.604088 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:34.839288 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:34.855142 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:34.872973 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:35.099889 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:35.338786 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:35.356221 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:35.373088 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:35.497908 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:35.599975 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:35.838565 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:35.854990 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:35.872952 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:36.099095 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:36.338721 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:36.355742 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:36.373803 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:36.608169 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:36.838847 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:36.855380 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:36.873223 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:37.100073 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:37.338432 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:37.355434 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:37.373386 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:37.499272 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:37.599976 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:37.839639 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:37.855906 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:37.873609 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:38.099912 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:38.338377 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:38.356022 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:38.372724 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:38.601467 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:38.839163 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:38.854929 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:38.872896 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:39.099974 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:39.339557 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:39.355100 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:39.372896 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:39.604734 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:39.838606 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:39.855064 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:39.874195 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:39.998189 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:40.100705 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:40.338678 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:40.355435 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:40.373290 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:40.601248 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:40.839098 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:40.855979 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:40.872905 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:41.100074 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:41.339783 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:41.356368 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:41.373386 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:41.604612 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:41.841066 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:41.855605 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:41.873675 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:42.105109 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:42.338973 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:42.356213 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:42.373509 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:42.498810 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:42.600416 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:42.839949 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:42.855868 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:42.873784 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:43.099291 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:43.338748 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:43.355665 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:43.373676 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:43.599691 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:43.839243 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:43.855573 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:43.873419 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:44.099590 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:44.338071 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:44.355829 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:44.373828 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:44.601955 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:44.839035 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:44.856189 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:44.872885 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:44.997599 1469715 node_ready.go:53] node "addons-091599" has status "Ready":"False"
	I0520 10:26:45.100227 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:45.338675 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:45.354960 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:45.372765 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:45.604670 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:45.839597 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:45.854989 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:45.874793 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:46.099866 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:46.338527 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:46.355444 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:46.373247 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:46.521067 1469715 node_ready.go:49] node "addons-091599" has status "Ready":"True"
	I0520 10:26:46.521143 1469715 node_ready.go:38] duration metric: took 31.526499444s for node "addons-091599" to be "Ready" ...
	I0520 10:26:46.521169 1469715 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
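Node readiness alone is not enough for the harness: after the 31.5s node wait it additionally gates on the system-critical pods whose labels are listed above. A sketch of equivalent manual checks (illustrative, not the node_ready.go/pod_ready.go internals):

	kubectl wait --for=condition=Ready node/addons-091599 --timeout=6m
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m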
	I0520 10:26:46.547669 1469715 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b9xf7" in "kube-system" namespace to be "Ready" ...
	I0520 10:26:46.606094 1469715 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0520 10:26:46.606163 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:46.995668 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:46.996342 1469715 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0520 10:26:46.996373 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:46.996408 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:47.136259 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:47.379270 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:47.380101 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:47.384260 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:47.600994 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:47.840979 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:47.856363 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:47.873940 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:48.101881 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:48.338397 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:48.357168 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:48.377443 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:48.555657 1469715 pod_ready.go:102] pod "coredns-7db6d8ff4d-b9xf7" in "kube-system" namespace has status "Ready":"False"
	I0520 10:26:48.611229 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:48.839389 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:48.859852 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:48.877457 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:49.104865 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:49.338914 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:49.355756 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:49.374554 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:49.553763 1469715 pod_ready.go:92] pod "coredns-7db6d8ff4d-b9xf7" in "kube-system" namespace has status "Ready":"True"
	I0520 10:26:49.553829 1469715 pod_ready.go:81] duration metric: took 3.006087336s for pod "coredns-7db6d8ff4d-b9xf7" in "kube-system" namespace to be "Ready" ...
	I0520 10:26:49.553858 1469715 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-091599" in "kube-system" namespace to be "Ready" ...
	I0520 10:26:49.558976 1469715 pod_ready.go:92] pod "etcd-addons-091599" in "kube-system" namespace has status "Ready":"True"
	I0520 10:26:49.559002 1469715 pod_ready.go:81] duration metric: took 5.136115ms for pod "etcd-addons-091599" in "kube-system" namespace to be "Ready" ...
	I0520 10:26:49.559017 1469715 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-091599" in "kube-system" namespace to be "Ready" ...
	I0520 10:26:49.564282 1469715 pod_ready.go:92] pod "kube-apiserver-addons-091599" in "kube-system" namespace has status "Ready":"True"
	I0520 10:26:49.564348 1469715 pod_ready.go:81] duration metric: took 5.322474ms for pod "kube-apiserver-addons-091599" in "kube-system" namespace to be "Ready" ...
	I0520 10:26:49.564367 1469715 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-091599" in "kube-system" namespace to be "Ready" ...
	I0520 10:26:49.570345 1469715 pod_ready.go:92] pod "kube-controller-manager-addons-091599" in "kube-system" namespace has status "Ready":"True"
	I0520 10:26:49.570368 1469715 pod_ready.go:81] duration metric: took 5.992358ms for pod "kube-controller-manager-addons-091599" in "kube-system" namespace to be "Ready" ...
	I0520 10:26:49.570382 1469715 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mxn9s" in "kube-system" namespace to be "Ready" ...
	I0520 10:26:49.575686 1469715 pod_ready.go:92] pod "kube-proxy-mxn9s" in "kube-system" namespace has status "Ready":"True"
	I0520 10:26:49.575713 1469715 pod_ready.go:81] duration metric: took 5.305489ms for pod "kube-proxy-mxn9s" in "kube-system" namespace to be "Ready" ...
	I0520 10:26:49.575725 1469715 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-091599" in "kube-system" namespace to be "Ready" ...
	I0520 10:26:49.602013 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:49.838692 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:49.855743 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:49.874125 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:49.951268 1469715 pod_ready.go:92] pod "kube-scheduler-addons-091599" in "kube-system" namespace has status "Ready":"True"
	I0520 10:26:49.951298 1469715 pod_ready.go:81] duration metric: took 375.564925ms for pod "kube-scheduler-addons-091599" in "kube-system" namespace to be "Ready" ...
	I0520 10:26:49.951311 1469715 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace to be "Ready" ...
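Each pod_ready.go poll above is reading the pod's Ready condition; the same value can be fetched directly with jsonpath (a sketch, with the pod name copied from the log line above):

	kubectl --context addons-091599 -n kube-system get pod metrics-server-c59844bb4-2952v \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'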
	I0520 10:26:50.106016 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:50.340229 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:50.356972 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:50.374234 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:50.621946 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:50.841744 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:50.856822 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:50.874380 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:51.101593 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:51.340157 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:51.357199 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:51.373573 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:51.606279 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:51.840064 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:51.856671 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:51.875093 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:51.959117 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:26:52.102176 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:52.339200 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:52.357035 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:52.373577 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:52.602913 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:52.839386 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:52.856303 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:52.874211 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:53.102008 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:53.339620 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:53.356099 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:53.373929 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:53.611329 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:53.838803 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:53.856834 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:53.874351 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:54.102543 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:54.339430 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:54.357376 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:54.381949 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:54.463692 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:26:54.602318 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:54.839577 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:54.856282 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:54.873555 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:55.100825 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:55.338532 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:55.360371 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:55.373585 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:55.611863 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:55.838460 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:55.857676 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:55.874829 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:56.103894 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:56.338891 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:56.362971 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:56.373990 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:56.610425 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:56.841405 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:56.855601 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:56.874831 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:56.958552 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:26:57.102216 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:57.338507 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:57.356414 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:57.373731 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:57.607339 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:57.839367 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:57.855648 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:57.874113 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:58.100419 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:58.338771 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:58.356168 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:58.373420 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:58.601479 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:58.838177 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:58.855267 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:58.873721 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:59.101815 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:59.346870 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:59.361635 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:59.374857 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:26:59.459810 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:26:59.609732 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:26:59.839480 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:26:59.856242 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:26:59.874458 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:00.151410 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:00.367885 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:00.368860 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:00.384973 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:00.610620 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:00.838700 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:00.856609 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:00.874360 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:01.102262 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:01.339185 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:01.356140 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:01.374251 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:01.618792 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:01.839011 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:01.856033 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:01.873985 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:01.958497 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:02.118710 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:02.338063 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:02.355506 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:02.373839 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:02.613690 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:02.839451 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:02.856455 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:02.875486 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:03.101800 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:03.338757 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:03.356554 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:03.374628 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:03.613265 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:03.838710 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:03.855774 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:03.874485 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:04.102832 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:04.338272 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:04.355852 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:04.374693 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:04.459281 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:04.605493 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:04.839506 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:04.856266 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:04.874304 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:05.103025 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:05.339358 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:05.356746 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:05.374868 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:05.632006 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:05.841528 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:05.857263 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:05.882459 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:06.102319 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:06.342001 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:06.357538 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:06.376299 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:06.604522 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:06.842080 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:06.855927 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:06.881545 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:06.961936 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:07.102409 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:07.343704 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:07.367603 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:07.378466 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:07.610573 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:07.838609 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:07.855441 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:07.874290 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:08.101047 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:08.338418 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:08.355303 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:08.374187 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:08.612673 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:08.838410 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:08.855710 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:08.874719 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:09.101204 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:09.338662 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:09.355255 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:09.373761 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:09.479970 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:09.610488 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:09.838916 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:09.856079 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:09.876119 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:10.101967 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:10.338720 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:10.356136 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:10.375353 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:10.612824 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:10.840778 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:10.857078 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:10.882429 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:11.104104 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:11.339699 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:11.365226 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:11.376224 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:11.620652 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:11.838923 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:11.858172 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:11.874989 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:11.958994 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:12.104433 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:12.339469 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:12.356360 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:12.375382 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:12.603859 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:12.840014 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:12.857622 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:12.877536 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:13.105355 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:13.339010 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:13.356902 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:13.378363 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:13.601855 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:13.839380 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:13.856331 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:13.874078 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:13.959386 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:14.101087 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:14.339277 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:14.355415 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:14.373840 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:14.607888 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:14.838690 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:14.855444 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:14.873777 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:15.106328 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:15.339108 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:15.356466 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:15.374620 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:15.609572 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:15.838633 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:15.856242 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:15.873917 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:16.101282 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:16.339818 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:16.361398 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:16.381419 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:16.458445 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:16.613095 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:16.840391 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:16.857737 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:16.875685 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:17.101423 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:17.340216 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:17.362996 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:17.388164 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:17.608798 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:17.841025 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:17.857413 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:17.878160 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:18.103468 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:18.339247 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:18.372334 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:18.427096 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:18.466712 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:18.629544 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:18.840767 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:18.858702 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:18.876652 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:19.104221 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:19.340019 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:19.361339 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:19.382164 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:19.627274 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:19.841034 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:19.856805 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:19.876630 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:20.102871 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:20.338439 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:20.360266 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:20.377904 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:20.607894 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:20.840624 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:20.857532 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:20.873764 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:20.958939 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:21.115337 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:21.338792 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:21.356576 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:21.373908 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:21.619218 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:21.839931 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:21.856857 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:21.873875 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:22.101373 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:22.340377 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:22.355873 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:22.374378 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:22.604802 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:22.838951 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:22.856671 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:22.885623 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:23.103529 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:23.339273 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:23.356085 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:23.374011 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:23.458691 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:23.601624 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:23.838622 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:23.855299 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:23.874022 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:24.100797 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:24.339726 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:24.355962 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:24.374740 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:24.609218 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:24.839601 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:24.859199 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:24.874628 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:25.101998 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:25.339271 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:25.355867 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:25.373966 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:25.602524 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:25.839868 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:25.857612 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:25.874667 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:25.971539 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:26.102084 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:26.339739 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:26.365608 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:26.375705 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:26.612028 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:26.841436 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:26.857587 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:26.884517 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:27.123741 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:27.338826 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:27.357550 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:27.374656 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:27.610402 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:27.838669 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:27.856894 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:27.874828 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:28.102533 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:28.339188 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:28.356418 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:28.373903 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:28.458117 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:28.608182 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:28.839537 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:28.856240 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:28.873556 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:29.102201 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:29.338599 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:29.355812 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:29.374403 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:27:29.600899 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:29.839397 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:29.856031 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:29.874423 1469715 kapi.go:107] duration metric: took 1m12.016233008s to wait for kubernetes.io/minikube-addons=registry ...
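The kapi.go loop above polls the registry addon's label selector until its pods are Ready (1m12s here). A roughly equivalent manual wait, as a sketch rather than minikube's actual implementation (the 6m timeout mirrors the waits logged in this section):

	kubectl --context addons-091599 -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=registry \
	  --for=condition=Ready --timeout=6m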
	I0520 10:27:30.101068 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:30.338451 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:30.356901 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:30.460664 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:30.614532 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:30.842934 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:30.857435 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:31.100759 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:31.339291 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:31.355704 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:31.602544 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:31.838001 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:31.857152 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:32.102113 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:32.339174 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:32.357899 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:32.606591 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:32.839752 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:32.856033 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:32.965982 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:33.101876 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:33.344675 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:27:33.356481 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:33.634168 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:33.845589 1469715 kapi.go:107] duration metric: took 1m11.510844281s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0520 10:27:33.855180 1469715 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-091599 cluster.
	I0520 10:27:33.864892 1469715 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0520 10:27:33.873611 1469715 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
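Following the gcp-auth notes above, opting a pod out of credential mounting means adding the `gcp-auth-skip-secret` label at creation time. A minimal sketch, where the pod name and image are hypothetical and only the label key comes from the message above (<<- strips the leading tabs used for display here):

	kubectl --context addons-091599 apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	  - name: app
	    image: nginx
	EOF

And to remount credentials into pods created before the addon finished, the refresh the log suggests would be:

	out/minikube-linux-arm64 -p addons-091599 addons enable gcp-auth --refresh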
	I0520 10:27:33.881926 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:34.108621 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:34.356779 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:34.612343 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:34.856917 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:35.127005 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:35.367619 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:35.459867 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:35.601540 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:35.856781 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:36.102258 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:36.355941 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:36.611114 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:36.878009 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:37.107420 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:37.356399 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:37.605011 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:37.859179 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:37.957228 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:38.107203 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:38.355856 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:38.600811 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:38.856121 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:39.101960 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:39.357068 1469715 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:27:39.606054 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:39.861805 1469715 kapi.go:107] duration metric: took 1m22.010661339s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0520 10:27:39.958064 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:40.105873 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:40.606303 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:41.101089 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:41.605969 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:42.101979 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:42.457835 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:42.604314 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:43.100480 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:43.602227 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:44.100844 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:44.459743 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:44.603761 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:45.102707 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:45.608930 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:46.101701 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:46.602617 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:46.957842 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:47.101866 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:47.605766 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:48.103993 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:48.600910 1469715 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:27:48.958039 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:49.101611 1469715 kapi.go:107] duration metric: took 1m30.506430912s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0520 10:27:49.103529 1469715 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, storage-provisioner, cloud-spanner, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0520 10:27:49.105119 1469715 addons.go:505] duration metric: took 1m37.873709639s for enable addons: enabled=[ingress-dns nvidia-device-plugin storage-provisioner cloud-spanner storage-provisioner-rancher inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
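
For reference, the kapi.go polling above can be approximated by hand with kubectl wait, which blocks until the selected pods report a Ready condition or the timeout expires. This is a minimal sketch, assuming the addons-091599 context from this run and the namespaces these addons deploy into (ingress-nginx and kube-system):

	# Wait for the ingress-nginx controller pods to become Ready (up to 90s).
	kubectl --context addons-091599 -n ingress-nginx wait pod \
	  --selector=app.kubernetes.io/name=ingress-nginx \
	  --for=condition=Ready --timeout=90s
	# Same pattern for the CSI hostpath driver pods.
	kubectl --context addons-091599 -n kube-system wait pod \
	  --selector=kubernetes.io/minikube-addons=csi-hostpath-driver \
	  --for=condition=Ready --timeout=90s
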
	I0520 10:27:51.458631 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:53.957414 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:55.957954 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:27:57.960888 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:00.464369 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:02.957313 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:04.958859 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:07.457951 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:09.957982 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:12.456914 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:14.458286 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:16.957076 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:18.957832 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:20.958011 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:22.959167 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:25.457915 1469715 pod_ready.go:102] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"False"
	I0520 10:28:27.457828 1469715 pod_ready.go:92] pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace has status "Ready":"True"
	I0520 10:28:27.457856 1469715 pod_ready.go:81] duration metric: took 1m37.50653712s for pod "metrics-server-c59844bb4-2952v" in "kube-system" namespace to be "Ready" ...
	I0520 10:28:27.457869 1469715 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-xt86b" in "kube-system" namespace to be "Ready" ...
	I0520 10:28:27.462993 1469715 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-xt86b" in "kube-system" namespace has status "Ready":"True"
	I0520 10:28:27.463018 1469715 pod_ready.go:81] duration metric: took 5.141317ms for pod "nvidia-device-plugin-daemonset-xt86b" in "kube-system" namespace to be "Ready" ...
	I0520 10:28:27.463038 1469715 pod_ready.go:38] duration metric: took 1m40.94182784s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
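
The pod_ready.go lines above are repeatedly reading each pod's Ready condition. A quick manual equivalent, using the metrics-server pod name that appears in this log, is a jsonpath query that prints the condition's status ("True" or "False"):

	# Print the Ready condition for the metrics-server pod seen in this run.
	kubectl --context addons-091599 -n kube-system get pod metrics-server-c59844bb4-2952v \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
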
	I0520 10:28:27.463052 1469715 api_server.go:52] waiting for apiserver process to appear ...
	I0520 10:28:27.463085 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 10:28:27.463149 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 10:28:27.514875 1469715 cri.go:89] found id: "733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b"
	I0520 10:28:27.514900 1469715 cri.go:89] found id: ""
	I0520 10:28:27.514908 1469715 logs.go:276] 1 containers: [733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b]
	I0520 10:28:27.514966 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:27.518734 1469715 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 10:28:27.518810 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 10:28:27.562055 1469715 cri.go:89] found id: "afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3"
	I0520 10:28:27.562079 1469715 cri.go:89] found id: ""
	I0520 10:28:27.562088 1469715 logs.go:276] 1 containers: [afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3]
	I0520 10:28:27.562152 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:27.565394 1469715 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 10:28:27.565474 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 10:28:27.606175 1469715 cri.go:89] found id: "a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b"
	I0520 10:28:27.606198 1469715 cri.go:89] found id: ""
	I0520 10:28:27.606207 1469715 logs.go:276] 1 containers: [a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b]
	I0520 10:28:27.606263 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:27.609904 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 10:28:27.609983 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 10:28:27.653357 1469715 cri.go:89] found id: "cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea"
	I0520 10:28:27.653382 1469715 cri.go:89] found id: ""
	I0520 10:28:27.653390 1469715 logs.go:276] 1 containers: [cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea]
	I0520 10:28:27.653454 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:27.657033 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 10:28:27.657107 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 10:28:27.694857 1469715 cri.go:89] found id: "8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65"
	I0520 10:28:27.694881 1469715 cri.go:89] found id: ""
	I0520 10:28:27.694889 1469715 logs.go:276] 1 containers: [8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65]
	I0520 10:28:27.694988 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:27.698305 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 10:28:27.698385 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 10:28:27.739425 1469715 cri.go:89] found id: "417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77"
	I0520 10:28:27.739446 1469715 cri.go:89] found id: ""
	I0520 10:28:27.739454 1469715 logs.go:276] 1 containers: [417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77]
	I0520 10:28:27.739512 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:27.742999 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 10:28:27.743076 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 10:28:27.785811 1469715 cri.go:89] found id: "eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da"
	I0520 10:28:27.785838 1469715 cri.go:89] found id: ""
	I0520 10:28:27.785846 1469715 logs.go:276] 1 containers: [eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da]
	I0520 10:28:27.785905 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:27.789365 1469715 logs.go:123] Gathering logs for kube-controller-manager [417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77] ...
	I0520 10:28:27.789391 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77"
	I0520 10:28:27.861032 1469715 logs.go:123] Gathering logs for kindnet [eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da] ...
	I0520 10:28:27.861070 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da"
	I0520 10:28:27.898675 1469715 logs.go:123] Gathering logs for CRI-O ...
	I0520 10:28:27.898704 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 10:28:27.990785 1469715 logs.go:123] Gathering logs for container status ...
	I0520 10:28:27.990823 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 10:28:28.041944 1469715 logs.go:123] Gathering logs for kube-scheduler [cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea] ...
	I0520 10:28:28.041976 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea"
	I0520 10:28:28.085430 1469715 logs.go:123] Gathering logs for kube-proxy [8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65] ...
	I0520 10:28:28.085467 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65"
	I0520 10:28:28.135190 1469715 logs.go:123] Gathering logs for kubelet ...
	I0520 10:28:28.135220 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0520 10:28:28.185758 1469715 logs.go:138] Found kubelet problem: May 20 10:26:46 addons-091599 kubelet[1492]: W0520 10:26:46.429641    1492 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-091599' and this object
	W0520 10:28:28.185971 1469715 logs.go:138] Found kubelet problem: May 20 10:26:46 addons-091599 kubelet[1492]: E0520 10:26:46.429704    1492 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-091599' and this object
	W0520 10:28:28.188120 1469715 logs.go:138] Found kubelet problem: May 20 10:26:46 addons-091599 kubelet[1492]: W0520 10:26:46.482578    1492 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-091599' and this object
	W0520 10:28:28.188331 1469715 logs.go:138] Found kubelet problem: May 20 10:26:46 addons-091599 kubelet[1492]: E0520 10:26:46.482633    1492 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-091599' and this object
	I0520 10:28:28.224727 1469715 logs.go:123] Gathering logs for dmesg ...
	I0520 10:28:28.224758 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 10:28:28.244015 1469715 logs.go:123] Gathering logs for describe nodes ...
	I0520 10:28:28.244050 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 10:28:28.409101 1469715 logs.go:123] Gathering logs for kube-apiserver [733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b] ...
	I0520 10:28:28.409147 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b"
	I0520 10:28:28.474928 1469715 logs.go:123] Gathering logs for etcd [afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3] ...
	I0520 10:28:28.474960 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3"
	I0520 10:28:28.531097 1469715 logs.go:123] Gathering logs for coredns [a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b] ...
	I0520 10:28:28.531130 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b"
	I0520 10:28:28.582043 1469715 out.go:304] Setting ErrFile to fd 2...
	I0520 10:28:28.582069 1469715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0520 10:28:28.582128 1469715 out.go:239] X Problems detected in kubelet:
	W0520 10:28:28.582137 1469715 out.go:239]   May 20 10:26:46 addons-091599 kubelet[1492]: W0520 10:26:46.429641    1492 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-091599' and this object
	W0520 10:28:28.582144 1469715 out.go:239]   May 20 10:26:46 addons-091599 kubelet[1492]: E0520 10:26:46.429704    1492 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-091599' and this object
	W0520 10:28:28.582155 1469715 out.go:239]   May 20 10:26:46 addons-091599 kubelet[1492]: W0520 10:26:46.482578    1492 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-091599' and this object
	W0520 10:28:28.582166 1469715 out.go:239]   May 20 10:26:46 addons-091599 kubelet[1492]: E0520 10:26:46.482633    1492 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-091599' and this object
	I0520 10:28:28.582173 1469715 out.go:304] Setting ErrFile to fd 2...
	I0520 10:28:28.582178 1469715 out.go:338] TERM=,COLORTERM=, which probably does not support color
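
The "container status" step in the log gathering above relies on a small shell fallback chain, worth spelling out because it explains why the same command can work on both CRI-O and Docker nodes:

	# Prefer the resolved path to crictl if `which` finds one; otherwise fall back
	# to a bare "crictl" lookup via $PATH, and if crictl fails entirely, try the
	# Docker CLI instead.
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
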
	I0520 10:28:38.583454 1469715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:28:38.599051 1469715 api_server.go:72] duration metric: took 2m27.367883564s to wait for apiserver process to appear ...
	I0520 10:28:38.599078 1469715 api_server.go:88] waiting for apiserver healthz status ...
	I0520 10:28:38.599112 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 10:28:38.599176 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 10:28:38.649407 1469715 cri.go:89] found id: "733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b"
	I0520 10:28:38.649438 1469715 cri.go:89] found id: ""
	I0520 10:28:38.649445 1469715 logs.go:276] 1 containers: [733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b]
	I0520 10:28:38.649502 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:38.653172 1469715 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 10:28:38.653253 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 10:28:38.692978 1469715 cri.go:89] found id: "afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3"
	I0520 10:28:38.693000 1469715 cri.go:89] found id: ""
	I0520 10:28:38.693008 1469715 logs.go:276] 1 containers: [afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3]
	I0520 10:28:38.693072 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:38.696539 1469715 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 10:28:38.696609 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 10:28:38.736066 1469715 cri.go:89] found id: "a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b"
	I0520 10:28:38.736190 1469715 cri.go:89] found id: ""
	I0520 10:28:38.736206 1469715 logs.go:276] 1 containers: [a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b]
	I0520 10:28:38.736297 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:38.740367 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 10:28:38.740449 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 10:28:38.781005 1469715 cri.go:89] found id: "cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea"
	I0520 10:28:38.781073 1469715 cri.go:89] found id: ""
	I0520 10:28:38.781096 1469715 logs.go:276] 1 containers: [cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea]
	I0520 10:28:38.781168 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:38.784808 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 10:28:38.784882 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 10:28:38.822327 1469715 cri.go:89] found id: "8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65"
	I0520 10:28:38.822395 1469715 cri.go:89] found id: ""
	I0520 10:28:38.822411 1469715 logs.go:276] 1 containers: [8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65]
	I0520 10:28:38.822480 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:38.826032 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 10:28:38.826108 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 10:28:38.866457 1469715 cri.go:89] found id: "417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77"
	I0520 10:28:38.866480 1469715 cri.go:89] found id: ""
	I0520 10:28:38.866488 1469715 logs.go:276] 1 containers: [417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77]
	I0520 10:28:38.866544 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:38.870103 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 10:28:38.870179 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 10:28:38.908330 1469715 cri.go:89] found id: "eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da"
	I0520 10:28:38.908353 1469715 cri.go:89] found id: ""
	I0520 10:28:38.908361 1469715 logs.go:276] 1 containers: [eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da]
	I0520 10:28:38.908476 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:38.911923 1469715 logs.go:123] Gathering logs for etcd [afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3] ...
	I0520 10:28:38.911953 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3"
	I0520 10:28:38.984383 1469715 logs.go:123] Gathering logs for coredns [a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b] ...
	I0520 10:28:38.984421 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b"
	I0520 10:28:39.036586 1469715 logs.go:123] Gathering logs for kube-scheduler [cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea] ...
	I0520 10:28:39.036617 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea"
	I0520 10:28:39.087722 1469715 logs.go:123] Gathering logs for kube-controller-manager [417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77] ...
	I0520 10:28:39.087759 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77"
	I0520 10:28:39.161394 1469715 logs.go:123] Gathering logs for kindnet [eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da] ...
	I0520 10:28:39.161429 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da"
	I0520 10:28:39.202412 1469715 logs.go:123] Gathering logs for container status ...
	I0520 10:28:39.202442 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 10:28:39.251600 1469715 logs.go:123] Gathering logs for kubelet ...
	I0520 10:28:39.251631 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0520 10:28:39.293885 1469715 logs.go:138] Found kubelet problem: May 20 10:26:46 addons-091599 kubelet[1492]: W0520 10:26:46.429641    1492 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-091599' and this object
	W0520 10:28:39.294101 1469715 logs.go:138] Found kubelet problem: May 20 10:26:46 addons-091599 kubelet[1492]: E0520 10:26:46.429704    1492 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-091599' and this object
	W0520 10:28:39.296222 1469715 logs.go:138] Found kubelet problem: May 20 10:26:46 addons-091599 kubelet[1492]: W0520 10:26:46.482578    1492 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-091599' and this object
	W0520 10:28:39.296431 1469715 logs.go:138] Found kubelet problem: May 20 10:26:46 addons-091599 kubelet[1492]: E0520 10:26:46.482633    1492 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-091599' and this object
	I0520 10:28:39.334382 1469715 logs.go:123] Gathering logs for describe nodes ...
	I0520 10:28:39.334413 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 10:28:39.464764 1469715 logs.go:123] Gathering logs for kube-apiserver [733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b] ...
	I0520 10:28:39.464797 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b"
	I0520 10:28:39.541526 1469715 logs.go:123] Gathering logs for kube-proxy [8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65] ...
	I0520 10:28:39.541563 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65"
	I0520 10:28:39.582495 1469715 logs.go:123] Gathering logs for CRI-O ...
	I0520 10:28:39.582523 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 10:28:39.678846 1469715 logs.go:123] Gathering logs for dmesg ...
	I0520 10:28:39.678883 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 10:28:39.697928 1469715 out.go:304] Setting ErrFile to fd 2...
	I0520 10:28:39.697956 1469715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0520 10:28:39.698012 1469715 out.go:239] X Problems detected in kubelet:
	W0520 10:28:39.698025 1469715 out.go:239]   May 20 10:26:46 addons-091599 kubelet[1492]: W0520 10:26:46.429641    1492 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-091599' and this object
	W0520 10:28:39.698032 1469715 out.go:239]   May 20 10:26:46 addons-091599 kubelet[1492]: E0520 10:26:46.429704    1492 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-091599' and this object
	W0520 10:28:39.698044 1469715 out.go:239]   May 20 10:26:46 addons-091599 kubelet[1492]: W0520 10:26:46.482578    1492 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-091599' and this object
	W0520 10:28:39.698053 1469715 out.go:239]   May 20 10:26:46 addons-091599 kubelet[1492]: E0520 10:26:46.482633    1492 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-091599' and this object
	I0520 10:28:39.698060 1469715 out.go:304] Setting ErrFile to fd 2...
	I0520 10:28:39.698072 1469715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:28:49.699283 1469715 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0520 10:28:49.706842 1469715 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0520 10:28:49.707926 1469715 api_server.go:141] control plane version: v1.30.1
	I0520 10:28:49.707950 1469715 api_server.go:131] duration metric: took 11.108865148s to wait for apiserver health ...
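
The healthz probe above hits the apiserver endpoint directly with the cluster's credentials. An equivalent check that reuses kubeconfig authentication, so it also works when anonymous access to /healthz is disabled, is:

	# Query the apiserver health endpoint through kubectl's authenticated transport.
	kubectl --context addons-091599 get --raw /healthz
	# A healthy control plane answers with: ok
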
	I0520 10:28:49.707960 1469715 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 10:28:49.707979 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 10:28:49.708046 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 10:28:49.747670 1469715 cri.go:89] found id: "733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b"
	I0520 10:28:49.747693 1469715 cri.go:89] found id: ""
	I0520 10:28:49.747701 1469715 logs.go:276] 1 containers: [733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b]
	I0520 10:28:49.747762 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:49.751183 1469715 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 10:28:49.751262 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 10:28:49.800402 1469715 cri.go:89] found id: "afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3"
	I0520 10:28:49.800421 1469715 cri.go:89] found id: ""
	I0520 10:28:49.800429 1469715 logs.go:276] 1 containers: [afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3]
	I0520 10:28:49.800491 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:49.804226 1469715 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 10:28:49.804295 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 10:28:49.849387 1469715 cri.go:89] found id: "a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b"
	I0520 10:28:49.849413 1469715 cri.go:89] found id: ""
	I0520 10:28:49.849421 1469715 logs.go:276] 1 containers: [a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b]
	I0520 10:28:49.849497 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:49.853716 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 10:28:49.853826 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 10:28:49.897908 1469715 cri.go:89] found id: "cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea"
	I0520 10:28:49.897932 1469715 cri.go:89] found id: ""
	I0520 10:28:49.897940 1469715 logs.go:276] 1 containers: [cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea]
	I0520 10:28:49.897996 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:49.901331 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 10:28:49.901463 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 10:28:49.939367 1469715 cri.go:89] found id: "8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65"
	I0520 10:28:49.939389 1469715 cri.go:89] found id: ""
	I0520 10:28:49.939397 1469715 logs.go:276] 1 containers: [8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65]
	I0520 10:28:49.939473 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:49.942930 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 10:28:49.943012 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 10:28:49.980709 1469715 cri.go:89] found id: "417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77"
	I0520 10:28:49.980734 1469715 cri.go:89] found id: ""
	I0520 10:28:49.980743 1469715 logs.go:276] 1 containers: [417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77]
	I0520 10:28:49.980801 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:49.985356 1469715 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 10:28:49.985429 1469715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 10:28:50.033391 1469715 cri.go:89] found id: "eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da"
	I0520 10:28:50.033415 1469715 cri.go:89] found id: ""
	I0520 10:28:50.033425 1469715 logs.go:276] 1 containers: [eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da]
	I0520 10:28:50.033508 1469715 ssh_runner.go:195] Run: which crictl
	I0520 10:28:50.037803 1469715 logs.go:123] Gathering logs for describe nodes ...
	I0520 10:28:50.037851 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 10:28:50.178515 1469715 logs.go:123] Gathering logs for kube-apiserver [733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b] ...
	I0520 10:28:50.178599 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b"
	I0520 10:28:50.261689 1469715 logs.go:123] Gathering logs for etcd [afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3] ...
	I0520 10:28:50.261764 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3"
	I0520 10:28:50.329567 1469715 logs.go:123] Gathering logs for coredns [a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b] ...
	I0520 10:28:50.329602 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b"
	I0520 10:28:50.366482 1469715 logs.go:123] Gathering logs for kube-scheduler [cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea] ...
	I0520 10:28:50.366513 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea"
	I0520 10:28:50.413015 1469715 logs.go:123] Gathering logs for kindnet [eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da] ...
	I0520 10:28:50.413046 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da"
	I0520 10:28:50.456114 1469715 logs.go:123] Gathering logs for CRI-O ...
	I0520 10:28:50.456140 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 10:28:50.550222 1469715 logs.go:123] Gathering logs for kubelet ...
	I0520 10:28:50.550307 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0520 10:28:50.599723 1469715 logs.go:138] Found kubelet problem: May 20 10:26:46 addons-091599 kubelet[1492]: W0520 10:26:46.429641    1492 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-091599' and this object
	W0520 10:28:50.599934 1469715 logs.go:138] Found kubelet problem: May 20 10:26:46 addons-091599 kubelet[1492]: E0520 10:26:46.429704    1492 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-091599' and this object
	W0520 10:28:50.602164 1469715 logs.go:138] Found kubelet problem: May 20 10:26:46 addons-091599 kubelet[1492]: W0520 10:26:46.482578    1492 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-091599' and this object
	W0520 10:28:50.602374 1469715 logs.go:138] Found kubelet problem: May 20 10:26:46 addons-091599 kubelet[1492]: E0520 10:26:46.482633    1492 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-091599' and this object
	I0520 10:28:50.641841 1469715 logs.go:123] Gathering logs for container status ...
	I0520 10:28:50.641874 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 10:28:50.689806 1469715 logs.go:123] Gathering logs for kube-proxy [8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65] ...
	I0520 10:28:50.689838 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65"
	I0520 10:28:50.727265 1469715 logs.go:123] Gathering logs for kube-controller-manager [417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77] ...
	I0520 10:28:50.727300 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77"
	I0520 10:28:50.794084 1469715 logs.go:123] Gathering logs for dmesg ...
	I0520 10:28:50.794122 1469715 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 10:28:50.813350 1469715 out.go:304] Setting ErrFile to fd 2...
	I0520 10:28:50.813382 1469715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0520 10:28:50.813430 1469715 out.go:239] X Problems detected in kubelet:
	W0520 10:28:50.813444 1469715 out.go:239]   May 20 10:26:46 addons-091599 kubelet[1492]: W0520 10:26:46.429641    1492 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-091599' and this object
	W0520 10:28:50.813451 1469715 out.go:239]   May 20 10:26:46 addons-091599 kubelet[1492]: E0520 10:26:46.429704    1492 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-091599' and this object
	W0520 10:28:50.813460 1469715 out.go:239]   May 20 10:26:46 addons-091599 kubelet[1492]: W0520 10:26:46.482578    1492 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-091599' and this object
	W0520 10:28:50.813471 1469715 out.go:239]   May 20 10:26:46 addons-091599 kubelet[1492]: E0520 10:26:46.482633    1492 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-091599" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-091599' and this object
	I0520 10:28:50.813477 1469715 out.go:304] Setting ErrFile to fd 2...
	I0520 10:28:50.813483 1469715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:29:00.832565 1469715 system_pods.go:59] 18 kube-system pods found
	I0520 10:29:00.832603 1469715 system_pods.go:61] "coredns-7db6d8ff4d-b9xf7" [6e2fbc19-14ad-48d3-9d75-cada8ca050cd] Running
	I0520 10:29:00.832610 1469715 system_pods.go:61] "csi-hostpath-attacher-0" [74986f6c-64f5-4633-91fa-e5f741e5a472] Running
	I0520 10:29:00.832615 1469715 system_pods.go:61] "csi-hostpath-resizer-0" [f101e109-8cf4-45fb-88bd-fb4f2c9b864b] Running
	I0520 10:29:00.832639 1469715 system_pods.go:61] "csi-hostpathplugin-29tk8" [7d24b514-c559-45cc-bf58-48fc804aba64] Running
	I0520 10:29:00.832650 1469715 system_pods.go:61] "etcd-addons-091599" [578d79c2-858b-40c4-b5dc-323248721eb9] Running
	I0520 10:29:00.832656 1469715 system_pods.go:61] "kindnet-46ck5" [081ed86e-80d3-418e-96ee-eed890edcef1] Running
	I0520 10:29:00.832663 1469715 system_pods.go:61] "kube-apiserver-addons-091599" [f950a9c9-5f3b-4719-96f4-c3cc19a9244c] Running
	I0520 10:29:00.832667 1469715 system_pods.go:61] "kube-controller-manager-addons-091599" [7c254f12-fd04-41dc-a93f-8bb4450ddfc1] Running
	I0520 10:29:00.832679 1469715 system_pods.go:61] "kube-ingress-dns-minikube" [5165966d-7976-41d5-aeda-453818f053d6] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0520 10:29:00.832688 1469715 system_pods.go:61] "kube-proxy-mxn9s" [62fa87b1-b9ee-49b2-bdf5-c453888491fe] Running
	I0520 10:29:00.832693 1469715 system_pods.go:61] "kube-scheduler-addons-091599" [e2982c39-66fa-471e-8738-fa5b24fa2577] Running
	I0520 10:29:00.832696 1469715 system_pods.go:61] "metrics-server-c59844bb4-2952v" [b05bfa4c-b71e-4ba3-82ec-ef3604433ba9] Running
	I0520 10:29:00.832700 1469715 system_pods.go:61] "nvidia-device-plugin-daemonset-xt86b" [e96a5492-ba66-4969-aaa2-03c1ea00e071] Running
	I0520 10:29:00.832726 1469715 system_pods.go:61] "registry-c9mld" [2c38d8b7-c7e2-4b49-a2c6-ce2a95367d53] Running
	I0520 10:29:00.832745 1469715 system_pods.go:61] "registry-proxy-2mv7g" [4c0da18b-a7b2-46aa-9e52-c5273f77fb67] Running
	I0520 10:29:00.832750 1469715 system_pods.go:61] "snapshot-controller-745499f584-b2m64" [1b65aa38-6b40-4c44-b1ea-f996d39e17d5] Running
	I0520 10:29:00.832753 1469715 system_pods.go:61] "snapshot-controller-745499f584-wsxwq" [962657a7-4a31-4fa2-bd12-e9ed25e89f37] Running
	I0520 10:29:00.832757 1469715 system_pods.go:61] "storage-provisioner" [f3bdda63-6ec2-4c3b-a250-090f43416d4d] Running
	I0520 10:29:00.832764 1469715 system_pods.go:74] duration metric: took 11.12479785s to wait for pod list to return data ...
	I0520 10:29:00.832776 1469715 default_sa.go:34] waiting for default service account to be created ...
	I0520 10:29:00.835262 1469715 default_sa.go:45] found service account: "default"
	I0520 10:29:00.835295 1469715 default_sa.go:55] duration metric: took 2.511771ms for default service account to be created ...
	I0520 10:29:00.835306 1469715 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 10:29:00.845848 1469715 system_pods.go:86] 18 kube-system pods found
	I0520 10:29:00.845886 1469715 system_pods.go:89] "coredns-7db6d8ff4d-b9xf7" [6e2fbc19-14ad-48d3-9d75-cada8ca050cd] Running
	I0520 10:29:00.845893 1469715 system_pods.go:89] "csi-hostpath-attacher-0" [74986f6c-64f5-4633-91fa-e5f741e5a472] Running
	I0520 10:29:00.845898 1469715 system_pods.go:89] "csi-hostpath-resizer-0" [f101e109-8cf4-45fb-88bd-fb4f2c9b864b] Running
	I0520 10:29:00.845902 1469715 system_pods.go:89] "csi-hostpathplugin-29tk8" [7d24b514-c559-45cc-bf58-48fc804aba64] Running
	I0520 10:29:00.845906 1469715 system_pods.go:89] "etcd-addons-091599" [578d79c2-858b-40c4-b5dc-323248721eb9] Running
	I0520 10:29:00.845910 1469715 system_pods.go:89] "kindnet-46ck5" [081ed86e-80d3-418e-96ee-eed890edcef1] Running
	I0520 10:29:00.845914 1469715 system_pods.go:89] "kube-apiserver-addons-091599" [f950a9c9-5f3b-4719-96f4-c3cc19a9244c] Running
	I0520 10:29:00.845918 1469715 system_pods.go:89] "kube-controller-manager-addons-091599" [7c254f12-fd04-41dc-a93f-8bb4450ddfc1] Running
	I0520 10:29:00.845929 1469715 system_pods.go:89] "kube-ingress-dns-minikube" [5165966d-7976-41d5-aeda-453818f053d6] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0520 10:29:00.845938 1469715 system_pods.go:89] "kube-proxy-mxn9s" [62fa87b1-b9ee-49b2-bdf5-c453888491fe] Running
	I0520 10:29:00.845952 1469715 system_pods.go:89] "kube-scheduler-addons-091599" [e2982c39-66fa-471e-8738-fa5b24fa2577] Running
	I0520 10:29:00.845956 1469715 system_pods.go:89] "metrics-server-c59844bb4-2952v" [b05bfa4c-b71e-4ba3-82ec-ef3604433ba9] Running
	I0520 10:29:00.845960 1469715 system_pods.go:89] "nvidia-device-plugin-daemonset-xt86b" [e96a5492-ba66-4969-aaa2-03c1ea00e071] Running
	I0520 10:29:00.845968 1469715 system_pods.go:89] "registry-c9mld" [2c38d8b7-c7e2-4b49-a2c6-ce2a95367d53] Running
	I0520 10:29:00.845972 1469715 system_pods.go:89] "registry-proxy-2mv7g" [4c0da18b-a7b2-46aa-9e52-c5273f77fb67] Running
	I0520 10:29:00.845976 1469715 system_pods.go:89] "snapshot-controller-745499f584-b2m64" [1b65aa38-6b40-4c44-b1ea-f996d39e17d5] Running
	I0520 10:29:00.845983 1469715 system_pods.go:89] "snapshot-controller-745499f584-wsxwq" [962657a7-4a31-4fa2-bd12-e9ed25e89f37] Running
	I0520 10:29:00.845987 1469715 system_pods.go:89] "storage-provisioner" [f3bdda63-6ec2-4c3b-a250-090f43416d4d] Running
	I0520 10:29:00.846002 1469715 system_pods.go:126] duration metric: took 10.671376ms to wait for k8s-apps to be running ...
	I0520 10:29:00.846010 1469715 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 10:29:00.846075 1469715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:29:00.858741 1469715 system_svc.go:56] duration metric: took 12.721914ms WaitForService to wait for kubelet
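
The WaitForService step boils down to the systemctl probe shown above: is-active exits 0 only while the unit is active. A hand-run version on the node (--quiet suppresses the printed state, leaving only the exit code) looks like:

	# Exit status 0 means the kubelet unit is active.
	sudo systemctl is-active --quiet kubelet && echo "kubelet: active"
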
	I0520 10:29:00.858772 1469715 kubeadm.go:576] duration metric: took 2m49.6276108s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 10:29:00.858793 1469715 node_conditions.go:102] verifying NodePressure condition ...
	I0520 10:29:00.861821 1469715 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0520 10:29:00.861854 1469715 node_conditions.go:123] node cpu capacity is 2
	I0520 10:29:00.861867 1469715 node_conditions.go:105] duration metric: took 3.069183ms to run NodePressure ...
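
The NodePressure check reads capacity straight from node status, which is where the 203034800Ki ephemeral-storage and 2-CPU figures above come from. The same fields can be pulled with a jsonpath query, for example:

	# Print name, CPU capacity, and ephemeral-storage capacity for every node.
	kubectl --context addons-091599 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\t"}{.status.capacity.ephemeral-storage}{"\n"}{end}'
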
	I0520 10:29:00.861879 1469715 start.go:240] waiting for startup goroutines ...
	I0520 10:29:00.861887 1469715 start.go:245] waiting for cluster config update ...
	I0520 10:29:00.861912 1469715 start.go:254] writing updated cluster config ...
	I0520 10:29:00.862220 1469715 ssh_runner.go:195] Run: rm -f paused
	I0520 10:29:01.190560 1469715 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 10:29:01.194703 1469715 out.go:177] * Done! kubectl is now configured to use "addons-091599" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	May 20 10:33:58 addons-091599 crio[906]: time="2024-05-20 10:33:58.426782384Z" level=info msg="Stopped pod sandbox (already stopped): 5867397c6b51c97270f08e917ef1bb5c2f1123a2cc7c5f3faa9cb7c7c98f72f1" id=fa4f2eaf-282c-4d2b-b0da-1a9cc58ab12c name=/runtime.v1.RuntimeService/StopPodSandbox
	May 20 10:33:58 addons-091599 crio[906]: time="2024-05-20 10:33:58.427118393Z" level=info msg="Removing pod sandbox: 5867397c6b51c97270f08e917ef1bb5c2f1123a2cc7c5f3faa9cb7c7c98f72f1" id=f42ddada-7169-4eee-8cef-be9611e51864 name=/runtime.v1.RuntimeService/RemovePodSandbox
	May 20 10:33:58 addons-091599 crio[906]: time="2024-05-20 10:33:58.437752214Z" level=info msg="Removed pod sandbox: 5867397c6b51c97270f08e917ef1bb5c2f1123a2cc7c5f3faa9cb7c7c98f72f1" id=f42ddada-7169-4eee-8cef-be9611e51864 name=/runtime.v1.RuntimeService/RemovePodSandbox
	May 20 10:33:58 addons-091599 crio[906]: time="2024-05-20 10:33:58.438370088Z" level=info msg="Stopping pod sandbox: 0c56eefc101188b839d05d4b76315f0c5d03e21a6a54de19c80ab6d45f884bc4" id=c15e5694-b615-4eb1-9018-fbb3dda3b696 name=/runtime.v1.RuntimeService/StopPodSandbox
	May 20 10:33:58 addons-091599 crio[906]: time="2024-05-20 10:33:58.438404983Z" level=info msg="Stopped pod sandbox (already stopped): 0c56eefc101188b839d05d4b76315f0c5d03e21a6a54de19c80ab6d45f884bc4" id=c15e5694-b615-4eb1-9018-fbb3dda3b696 name=/runtime.v1.RuntimeService/StopPodSandbox
	May 20 10:33:58 addons-091599 crio[906]: time="2024-05-20 10:33:58.438758526Z" level=info msg="Removing pod sandbox: 0c56eefc101188b839d05d4b76315f0c5d03e21a6a54de19c80ab6d45f884bc4" id=49f50024-b7d6-428e-8ff0-402417494873 name=/runtime.v1.RuntimeService/RemovePodSandbox
	May 20 10:33:58 addons-091599 crio[906]: time="2024-05-20 10:33:58.447015942Z" level=info msg="Removed pod sandbox: 0c56eefc101188b839d05d4b76315f0c5d03e21a6a54de19c80ab6d45f884bc4" id=49f50024-b7d6-428e-8ff0-402417494873 name=/runtime.v1.RuntimeService/RemovePodSandbox
	May 20 10:34:19 addons-091599 crio[906]: time="2024-05-20 10:34:19.981701090Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=eca03175-42a8-45fb-9fe9-69d3e82516ec name=/runtime.v1.ImageService/ImageStatus
	May 20 10:34:19 addons-091599 crio[906]: time="2024-05-20 10:34:19.981921155Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=eca03175-42a8-45fb-9fe9-69d3e82516ec name=/runtime.v1.ImageService/ImageStatus
	May 20 10:34:19 addons-091599 crio[906]: time="2024-05-20 10:34:19.982661126Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=4acf5b85-af07-4ad3-b8f7-c28fc44bda3b name=/runtime.v1.ImageService/ImageStatus
	May 20 10:34:19 addons-091599 crio[906]: time="2024-05-20 10:34:19.982818759Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=4acf5b85-af07-4ad3-b8f7-c28fc44bda3b name=/runtime.v1.ImageService/ImageStatus
	May 20 10:34:19 addons-091599 crio[906]: time="2024-05-20 10:34:19.983498277Z" level=info msg="Creating container: default/hello-world-app-86c47465fc-6r6ll/hello-world-app" id=f7765650-79ca-4aa6-afaf-893176fae56a name=/runtime.v1.RuntimeService/CreateContainer
	May 20 10:34:19 addons-091599 crio[906]: time="2024-05-20 10:34:19.983598943Z" level=warning msg="Allowed annotations are specified for workload []"
	May 20 10:34:20 addons-091599 crio[906]: time="2024-05-20 10:34:20.066220660Z" level=info msg="Created container 47c1c0076f011763a287ba5818c293efc59feb247c5a799ae8cf917d618ceb6e: default/hello-world-app-86c47465fc-6r6ll/hello-world-app" id=f7765650-79ca-4aa6-afaf-893176fae56a name=/runtime.v1.RuntimeService/CreateContainer
	May 20 10:34:20 addons-091599 crio[906]: time="2024-05-20 10:34:20.067098852Z" level=info msg="Starting container: 47c1c0076f011763a287ba5818c293efc59feb247c5a799ae8cf917d618ceb6e" id=70432ef9-a409-4ce7-8278-c0db1ad7ce54 name=/runtime.v1.RuntimeService/StartContainer
	May 20 10:34:20 addons-091599 crio[906]: time="2024-05-20 10:34:20.075397875Z" level=info msg="Started container" PID=8838 containerID=47c1c0076f011763a287ba5818c293efc59feb247c5a799ae8cf917d618ceb6e description=default/hello-world-app-86c47465fc-6r6ll/hello-world-app id=70432ef9-a409-4ce7-8278-c0db1ad7ce54 name=/runtime.v1.RuntimeService/StartContainer sandboxID=89c71ba72dc6f4f0d7037eca4b4da6b3e9f384838ea309a934be19a18dec3590
	May 20 10:34:20 addons-091599 conmon[8827]: conmon 47c1c0076f011763a287 <ninfo>: container 8838 exited with status 1
	May 20 10:34:20 addons-091599 crio[906]: time="2024-05-20 10:34:20.301300080Z" level=info msg="Removing container: 5a6874f91b07d22841ac9b0ed42bc6b79d744aa7968be5535ead8a4e79979a0b" id=23c3f820-228a-4368-bdf8-41e0fe05e424 name=/runtime.v1.RuntimeService/RemoveContainer
	May 20 10:34:20 addons-091599 crio[906]: time="2024-05-20 10:34:20.319206732Z" level=info msg="Removed container 5a6874f91b07d22841ac9b0ed42bc6b79d744aa7968be5535ead8a4e79979a0b: default/hello-world-app-86c47465fc-6r6ll/hello-world-app" id=23c3f820-228a-4368-bdf8-41e0fe05e424 name=/runtime.v1.RuntimeService/RemoveContainer
	May 20 10:35:28 addons-091599 crio[906]: time="2024-05-20 10:35:28.736866503Z" level=info msg="Stopping container: 02f9a1a4f7601a1e52b81b2726492153fb6869934090af5ec45db1d64aa54e9f (timeout: 30s)" id=55906b35-f518-456d-a4f8-22b8c27d2a37 name=/runtime.v1.RuntimeService/StopContainer
	May 20 10:35:29 addons-091599 crio[906]: time="2024-05-20 10:35:29.923421042Z" level=info msg="Stopped container 02f9a1a4f7601a1e52b81b2726492153fb6869934090af5ec45db1d64aa54e9f: kube-system/metrics-server-c59844bb4-2952v/metrics-server" id=55906b35-f518-456d-a4f8-22b8c27d2a37 name=/runtime.v1.RuntimeService/StopContainer
	May 20 10:35:29 addons-091599 crio[906]: time="2024-05-20 10:35:29.924011789Z" level=info msg="Stopping pod sandbox: 3e7dbb37eb4182f59d1ec784b89b9b5e24a869a00aaeacf9893abd70a2f42492" id=47010246-2895-4e60-b3a5-d9a97bb766ef name=/runtime.v1.RuntimeService/StopPodSandbox
	May 20 10:35:29 addons-091599 crio[906]: time="2024-05-20 10:35:29.924273429Z" level=info msg="Got pod network &{Name:metrics-server-c59844bb4-2952v Namespace:kube-system ID:3e7dbb37eb4182f59d1ec784b89b9b5e24a869a00aaeacf9893abd70a2f42492 UID:b05bfa4c-b71e-4ba3-82ec-ef3604433ba9 NetNS:/var/run/netns/888ca026-0e5f-428d-bd25-e678924deac3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	May 20 10:35:29 addons-091599 crio[906]: time="2024-05-20 10:35:29.924430603Z" level=info msg="Deleting pod kube-system_metrics-server-c59844bb4-2952v from CNI network \"kindnet\" (type=ptp)"
	May 20 10:35:29 addons-091599 crio[906]: time="2024-05-20 10:35:29.967880732Z" level=info msg="Stopped pod sandbox: 3e7dbb37eb4182f59d1ec784b89b9b5e24a869a00aaeacf9893abd70a2f42492" id=47010246-2895-4e60-b3a5-d9a97bb766ef name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	47c1c0076f011       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                        About a minute ago   Exited              hello-world-app           4                   89c71ba72dc6f       hello-world-app-86c47465fc-6r6ll
	8453a37ffff10       docker.io/library/nginx@sha256:05325b3a32db871dc396a859d9a9609d75f50d2f7ad12194f9f3a550111bdcaa                         5 minutes ago        Running             nginx                     0                   639c326bab448       nginx
	48ed74134a932       ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474                   6 minutes ago        Running             headlamp                  0                   7054d05485d2a       headlamp-68456f997b-kzpsq
	48a7b240a2508       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69            7 minutes ago        Running             gcp-auth                  0                   d45942f337131       gcp-auth-5db96cd9b4-tpqqv
	02f9a1a4f7601       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70   8 minutes ago        Exited              metrics-server            0                   3e7dbb37eb418       metrics-server-c59844bb4-2952v
	ba27e2e0ccc2d       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         8 minutes ago        Running             yakd                      0                   cd9fc24a11df7       yakd-dashboard-5ddbf7d777-zk8ph
	a4f3accb83dd9       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                        8 minutes ago        Running             coredns                   0                   25e1db186e60d       coredns-7db6d8ff4d-b9xf7
	0245d6608194b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        8 minutes ago        Running             storage-provisioner       0                   edbda11fa8f1b       storage-provisioner
	8c5f80237ca50       05eccb821e159092de1ef74c21cd16c403e1cda549ba56687391f50f087310ee                                                        9 minutes ago        Running             kube-proxy                0                   30b7d1c053dec       kube-proxy-mxn9s
	eacd599cd704c       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d                                                        9 minutes ago        Running             kindnet-cni               0                   aa5db3edaf56f       kindnet-46ck5
	afcbbf4b5b7a4       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                        9 minutes ago        Running             etcd                      0                   f9cef73b37f03       etcd-addons-091599
	cd0f27c747443       163ff818d154da230c6c8a6211a0d939689ba04df75dba2c3742fe740ffdf44a                                                        9 minutes ago        Running             kube-scheduler            0                   c23d3b94371cd       kube-scheduler-addons-091599
	733d7717e335e       988b55d423baf54b1515b7560890c84d1822a8dfbdfcecfb2576f8cb5c2b28ee                                                        9 minutes ago        Running             kube-apiserver            0                   d7ef59cfc9fcc       kube-apiserver-addons-091599
	417ff80330879       234ac56e455bef8ae70f120f49fcf5daea5c4171c5ffbf8f9e791a516fef99b4                                                        9 minutes ago        Running             kube-controller-manager   0                   926131727e20d       kube-controller-manager-addons-091599
	
	
	==> coredns [a4f3accb83dd9aab9ed6e33bdae4f73216c1985604e13bd8608050a9c4f0070b] <==
	[INFO] 10.244.0.20:51206 - 35368 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048573s
	[INFO] 10.244.0.20:51206 - 50867 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000062259s
	[INFO] 10.244.0.20:35306 - 24989 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002379122s
	[INFO] 10.244.0.20:51206 - 59263 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001380375s
	[INFO] 10.244.0.20:51206 - 54430 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001102638s
	[INFO] 10.244.0.20:35306 - 62106 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000590288s
	[INFO] 10.244.0.20:51206 - 62475 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000051002s
	[INFO] 10.244.0.20:36704 - 56449 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000119251s
	[INFO] 10.244.0.20:52848 - 13990 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000171738s
	[INFO] 10.244.0.20:52848 - 64641 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00005544s
	[INFO] 10.244.0.20:36704 - 27944 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000065943s
	[INFO] 10.244.0.20:36704 - 55226 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000054973s
	[INFO] 10.244.0.20:52848 - 824 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000046719s
	[INFO] 10.244.0.20:52848 - 37686 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000046924s
	[INFO] 10.244.0.20:52848 - 54635 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041451s
	[INFO] 10.244.0.20:52848 - 37699 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000050141s
	[INFO] 10.244.0.20:36704 - 29272 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000342499s
	[INFO] 10.244.0.20:36704 - 24939 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000191011s
	[INFO] 10.244.0.20:52848 - 53833 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001304061s
	[INFO] 10.244.0.20:36704 - 61187 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000067002s
	[INFO] 10.244.0.20:52848 - 34392 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001235214s
	[INFO] 10.244.0.20:36704 - 63626 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001428752s
	[INFO] 10.244.0.20:52848 - 16330 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000055432s
	[INFO] 10.244.0.20:36704 - 21264 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001107758s
	[INFO] 10.244.0.20:36704 - 51671 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000050288s
	
	
	==> describe nodes <==
	Name:               addons-091599
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-091599
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=addons-091599
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T10_25_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-091599
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 10:25:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-091599
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 10:35:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 10:33:06 +0000   Mon, 20 May 2024 10:25:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 10:33:06 +0000   Mon, 20 May 2024 10:25:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 10:33:06 +0000   Mon, 20 May 2024 10:25:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 10:33:06 +0000   Mon, 20 May 2024 10:26:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-091599
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 51b9f74756bb4cda9ea81c779a1d1fc0
	  System UUID:                4e008f60-cdc7-4895-a474-c1c9872af671
	  Boot ID:                    df9684e8-d429-41b3-8a9f-ef96b9c9133b
	  Kernel Version:             5.15.0-1058-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-6r6ll         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  gcp-auth                    gcp-auth-5db96cd9b4-tpqqv                0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  headlamp                    headlamp-68456f997b-kzpsq                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 coredns-7db6d8ff4d-b9xf7                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     9m17s
	  kube-system                 etcd-addons-091599                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m32s
	  kube-system                 kindnet-46ck5                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m18s
	  kube-system                 kube-apiserver-addons-091599             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                 kube-controller-manager-addons-091599    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                 kube-proxy-mxn9s                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-scheduler-addons-091599             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-zk8ph          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     9m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m12s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m39s (x8 over 9m39s)  kubelet          Node addons-091599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m39s (x8 over 9m39s)  kubelet          Node addons-091599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m39s (x8 over 9m39s)  kubelet          Node addons-091599 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m33s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m32s                  kubelet          Node addons-091599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m32s                  kubelet          Node addons-091599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m32s                  kubelet          Node addons-091599 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m20s                  node-controller  Node addons-091599 event: Registered Node addons-091599 in Controller
	  Normal  NodeReady                8m44s                  kubelet          Node addons-091599 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000971] FS-Cache: N-cookie d=00000000865130fc{9p.inode} n=0000000088505b0d
	[  +0.001079] FS-Cache: N-key=[8] '9a823b0000000000'
	[  +0.005246] FS-Cache: Duplicate cookie detected
	[  +0.000764] FS-Cache: O-cookie c=0000016e [p=0000016b fl=226 nc=0 na=1]
	[  +0.001034] FS-Cache: O-cookie d=00000000865130fc{9p.inode} n=00000000c223b28b
	[  +0.001098] FS-Cache: O-key=[8] '9a823b0000000000'
	[  +0.000735] FS-Cache: N-cookie c=00000175 [p=0000016b fl=2 nc=0 na=1]
	[  +0.001050] FS-Cache: N-cookie d=00000000865130fc{9p.inode} n=00000000d2aa7710
	[  +0.001064] FS-Cache: N-key=[8] '9a823b0000000000'
	[  +2.844064] FS-Cache: Duplicate cookie detected
	[  +0.000773] FS-Cache: O-cookie c=0000016c [p=0000016b fl=226 nc=0 na=1]
	[  +0.001012] FS-Cache: O-cookie d=00000000865130fc{9p.inode} n=00000000e950dce3
	[  +0.001203] FS-Cache: O-key=[8] '99823b0000000000'
	[  +0.000830] FS-Cache: N-cookie c=00000177 [p=0000016b fl=2 nc=0 na=1]
	[  +0.001028] FS-Cache: N-cookie d=00000000865130fc{9p.inode} n=0000000088505b0d
	[  +0.001149] FS-Cache: N-key=[8] '99823b0000000000'
	[  +0.273462] FS-Cache: Duplicate cookie detected
	[  +0.000786] FS-Cache: O-cookie c=00000171 [p=0000016b fl=226 nc=0 na=1]
	[  +0.001011] FS-Cache: O-cookie d=00000000865130fc{9p.inode} n=00000000648a8f54
	[  +0.001088] FS-Cache: O-key=[8] 'a1823b0000000000'
	[  +0.000816] FS-Cache: N-cookie c=00000178 [p=0000016b fl=2 nc=0 na=1]
	[  +0.000985] FS-Cache: N-cookie d=00000000865130fc{9p.inode} n=00000000ccc9761f
	[  +0.001097] FS-Cache: N-key=[8] 'a1823b0000000000'
	[May20 09:58] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	[  +0.555832] overlayfs: '/var/lib/containers/storage/overlay/l/ZLTOCNGE2IGM6DT7VP2QP7OV3M' not a directory
	
	
	==> etcd [afcbbf4b5b7a4d029496110c72a4c8a32cad61103a0181a929cacc31abf7a1e3] <==
	{"level":"info","ts":"2024-05-20T10:26:13.910357Z","caller":"traceutil/trace.go:171","msg":"trace[64789948] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"118.038539ms","start":"2024-05-20T10:26:13.792282Z","end":"2024-05-20T10:26:13.910321Z","steps":["trace[64789948] 'process raft request'  (duration: 14.304313ms)","trace[64789948] 'compare'  (duration: 41.838836ms)","trace[64789948] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/pods/kube-system/kube-proxy-mxn9s; req_size:3408; } (duration: 59.653118ms)"],"step_count":3}
	{"level":"info","ts":"2024-05-20T10:26:14.506258Z","caller":"traceutil/trace.go:171","msg":"trace[1369869291] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"174.868623ms","start":"2024-05-20T10:26:14.33137Z","end":"2024-05-20T10:26:14.506238Z","steps":["trace[1369869291] 'process raft request'  (duration: 106.190656ms)","trace[1369869291] 'compare'  (duration: 65.326321ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T10:26:15.148786Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.294141ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-mxn9s\" ","response":"range_response_count:1 size:3426"}
	{"level":"info","ts":"2024-05-20T10:26:15.156623Z","caller":"traceutil/trace.go:171","msg":"trace[448250679] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-mxn9s; range_end:; response_count:1; response_revision:389; }","duration":"130.110541ms","start":"2024-05-20T10:26:15.026465Z","end":"2024-05-20T10:26:15.156575Z","steps":["trace[448250679] 'agreement among raft nodes before linearized reading'  (duration: 22.882296ms)","trace[448250679] 'range keys from in-memory index tree'  (duration: 99.362057ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T10:26:15.21604Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.338541ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128029299912726183 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" mod_revision:0 > success:<request_put:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" value_size:3057 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-05-20T10:26:15.269628Z","caller":"traceutil/trace.go:171","msg":"trace[1872530540] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"240.839942ms","start":"2024-05-20T10:26:15.028701Z","end":"2024-05-20T10:26:15.269541Z","steps":["trace[1872530540] 'process raft request'  (duration: 41.943425ms)","trace[1872530540] 'store kv pair into bolt db' {req_type:put; key:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; req_size:3125; } (duration: 86.915681ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T10:26:15.291469Z","caller":"traceutil/trace.go:171","msg":"trace[489295856] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"241.994771ms","start":"2024-05-20T10:26:15.049459Z","end":"2024-05-20T10:26:15.291454Z","steps":["trace[489295856] 'process raft request'  (duration: 220.052262ms)","trace[489295856] 'compare'  (duration: 21.478881ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T10:26:15.297129Z","caller":"traceutil/trace.go:171","msg":"trace[199199671] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"247.603115ms","start":"2024-05-20T10:26:15.049507Z","end":"2024-05-20T10:26:15.29711Z","steps":["trace[199199671] 'process raft request'  (duration: 241.566056ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:26:15.297361Z","caller":"traceutil/trace.go:171","msg":"trace[2005051907] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"227.377695ms","start":"2024-05-20T10:26:15.069977Z","end":"2024-05-20T10:26:15.297354Z","steps":["trace[2005051907] 'process raft request'  (duration: 221.12711ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:26:15.297453Z","caller":"traceutil/trace.go:171","msg":"trace[1248373236] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"227.080364ms","start":"2024-05-20T10:26:15.070366Z","end":"2024-05-20T10:26:15.297447Z","steps":["trace[1248373236] 'process raft request'  (duration: 220.770753ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:26:15.297545Z","caller":"traceutil/trace.go:171","msg":"trace[1744556478] linearizableReadLoop","detail":"{readStateIndex:404; appliedIndex:400; }","duration":"227.169823ms","start":"2024-05-20T10:26:15.070351Z","end":"2024-05-20T10:26:15.297521Z","steps":["trace[1744556478] 'read index received'  (duration: 298.496µs)","trace[1744556478] 'applied index is now lower than readState.Index'  (duration: 226.870736ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T10:26:15.301846Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.487331ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T10:26:15.301975Z","caller":"traceutil/trace.go:171","msg":"trace[706956391] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:395; }","duration":"231.62679ms","start":"2024-05-20T10:26:15.070335Z","end":"2024-05-20T10:26:15.301962Z","steps":["trace[706956391] 'agreement among raft nodes before linearized reading'  (duration: 231.44866ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:26:16.172439Z","caller":"traceutil/trace.go:171","msg":"trace[479206389] linearizableReadLoop","detail":"{readStateIndex:481; appliedIndex:480; }","duration":"118.652392ms","start":"2024-05-20T10:26:16.053751Z","end":"2024-05-20T10:26:16.172403Z","steps":["trace[479206389] 'read index received'  (duration: 8.647463ms)","trace[479206389] 'applied index is now lower than readState.Index'  (duration: 109.738203ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T10:26:16.172649Z","caller":"traceutil/trace.go:171","msg":"trace[338265923] transaction","detail":"{read_only:false; response_revision:471; number_of_response:1; }","duration":"112.882404ms","start":"2024-05-20T10:26:16.05927Z","end":"2024-05-20T10:26:16.172152Z","steps":["trace[338265923] 'process raft request'  (duration: 91.466955ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:26:16.172796Z","caller":"traceutil/trace.go:171","msg":"trace[338464100] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"108.23194ms","start":"2024-05-20T10:26:16.064557Z","end":"2024-05-20T10:26:16.172789Z","steps":["trace[338464100] 'process raft request'  (duration: 86.251548ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:26:16.173089Z","caller":"traceutil/trace.go:171","msg":"trace[1676262612] transaction","detail":"{read_only:false; response_revision:470; number_of_response:1; }","duration":"103.616314ms","start":"2024-05-20T10:26:16.058731Z","end":"2024-05-20T10:26:16.162347Z","steps":["trace[1676262612] 'process raft request'  (duration: 18.139173ms)","trace[1676262612] 'attach lease to kv pair' {req_type:put; key:/registry/events/kube-system/metrics-server.17d12b835118af6a; req_size:704; } (duration: 73.765295ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T10:26:16.185145Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.013171ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-05-20T10:26:16.185246Z","caller":"traceutil/trace.go:171","msg":"trace[159244787] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:478; }","duration":"126.123323ms","start":"2024-05-20T10:26:16.059111Z","end":"2024-05-20T10:26:16.185234Z","steps":["trace[159244787] 'agreement among raft nodes before linearized reading'  (duration: 125.915147ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T10:26:16.186341Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.449455ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/yakd-dashboard/yakd-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-05-20T10:26:16.173387Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.624391ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4096"}
	{"level":"warn","ts":"2024-05-20T10:26:16.1894Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.237923ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-05-20T10:26:16.189452Z","caller":"traceutil/trace.go:171","msg":"trace[414579672] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:478; }","duration":"130.298491ms","start":"2024-05-20T10:26:16.059144Z","end":"2024-05-20T10:26:16.189443Z","steps":["trace[414579672] 'agreement among raft nodes before linearized reading'  (duration: 130.154002ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:26:16.18943Z","caller":"traceutil/trace.go:171","msg":"trace[2120978683] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:473; }","duration":"135.671573ms","start":"2024-05-20T10:26:16.053744Z","end":"2024-05-20T10:26:16.189416Z","steps":["trace[2120978683] 'agreement among raft nodes before linearized reading'  (duration: 118.396356ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:26:16.190482Z","caller":"traceutil/trace.go:171","msg":"trace[907544207] range","detail":"{range_begin:/registry/services/specs/yakd-dashboard/yakd-dashboard; range_end:; response_count:0; response_revision:478; }","duration":"124.592895ms","start":"2024-05-20T10:26:16.065878Z","end":"2024-05-20T10:26:16.190471Z","steps":["trace[907544207] 'agreement among raft nodes before linearized reading'  (duration: 120.439864ms)"],"step_count":1}
	
	
	==> gcp-auth [48a7b240a25082dbb7e9990a41a8900ff0b7b63cd49dc0701beccbb9ee525c07] <==
	2024/05/20 10:27:33 GCP Auth Webhook started!
	2024/05/20 10:29:02 Ready to marshal response ...
	2024/05/20 10:29:02 Ready to write response ...
	2024/05/20 10:29:02 Ready to marshal response ...
	2024/05/20 10:29:02 Ready to write response ...
	2024/05/20 10:29:02 Ready to marshal response ...
	2024/05/20 10:29:02 Ready to write response ...
	2024/05/20 10:29:12 Ready to marshal response ...
	2024/05/20 10:29:12 Ready to write response ...
	2024/05/20 10:29:17 Ready to marshal response ...
	2024/05/20 10:29:17 Ready to write response ...
	2024/05/20 10:29:17 Ready to marshal response ...
	2024/05/20 10:29:17 Ready to write response ...
	2024/05/20 10:29:25 Ready to marshal response ...
	2024/05/20 10:29:25 Ready to write response ...
	2024/05/20 10:29:36 Ready to marshal response ...
	2024/05/20 10:29:36 Ready to write response ...
	2024/05/20 10:30:08 Ready to marshal response ...
	2024/05/20 10:30:08 Ready to write response ...
	2024/05/20 10:30:24 Ready to marshal response ...
	2024/05/20 10:30:24 Ready to write response ...
	2024/05/20 10:32:45 Ready to marshal response ...
	2024/05/20 10:32:45 Ready to write response ...
	
	
	==> kernel <==
	 10:35:30 up 1 day, 18:17,  0 users,  load average: 0.43, 0.81, 1.74
	Linux addons-091599 5.15.0-1058-aws #64~20.04.1-Ubuntu SMP Tue Apr 9 11:11:55 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [eacd599cd704cbe9b437490daaa4fce5992d26d39752b9013ce687071c4f97da] <==
	I0520 10:33:26.589862       1 main.go:227] handling current node
	I0520 10:33:36.609775       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0520 10:33:36.609803       1 main.go:227] handling current node
	I0520 10:33:46.621308       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0520 10:33:46.621337       1 main.go:227] handling current node
	I0520 10:33:56.634210       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0520 10:33:56.634247       1 main.go:227] handling current node
	I0520 10:34:06.644303       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0520 10:34:06.644331       1 main.go:227] handling current node
	I0520 10:34:16.648756       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0520 10:34:16.648784       1 main.go:227] handling current node
	I0520 10:34:26.661617       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0520 10:34:26.661676       1 main.go:227] handling current node
	I0520 10:34:36.672749       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0520 10:34:36.672775       1 main.go:227] handling current node
	I0520 10:34:46.676712       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0520 10:34:46.676741       1 main.go:227] handling current node
	I0520 10:34:56.688774       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0520 10:34:56.688806       1 main.go:227] handling current node
	I0520 10:35:06.695230       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0520 10:35:06.695258       1 main.go:227] handling current node
	I0520 10:35:16.699326       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0520 10:35:16.699355       1 main.go:227] handling current node
	I0520 10:35:26.706236       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0520 10:35:26.706271       1 main.go:227] handling current node
	
	
	==> kube-apiserver [733d7717e335edae9ef26dee586bcad5bcd2646eaf6e58f26ebf7fabce5cfe0b] <==
	E0520 10:28:27.141538       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.204.129:443/apis/metrics.k8s.io/v1beta1: Get "https://10.108.204.129:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.108.204.129:443: connect: connection refused
	E0520 10:28:27.151253       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.204.129:443/apis/metrics.k8s.io/v1beta1: Get "https://10.108.204.129:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.108.204.129:443: connect: connection refused
	E0520 10:28:27.175742       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.204.129:443/apis/metrics.k8s.io/v1beta1: Get "https://10.108.204.129:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.108.204.129:443: connect: connection refused
	I0520 10:28:27.280565       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0520 10:29:02.089328       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.6.185"}
	E0520 10:29:41.191006       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0520 10:29:47.856267       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0520 10:30:15.045809       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0520 10:30:16.084023       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0520 10:30:20.321327       1 watch.go:250] http2: stream closed
	I0520 10:30:23.364467       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 10:30:23.364598       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 10:30:23.395213       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 10:30:23.395278       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 10:30:23.438027       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 10:30:23.438078       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 10:30:23.467459       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 10:30:23.467502       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 10:30:24.066968       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0520 10:30:24.365203       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.46.171"}
	W0520 10:30:24.440350       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0520 10:30:24.468445       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0520 10:30:24.501291       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0520 10:32:45.632340       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.254.96"}
	E0520 10:33:02.158270       1 watch.go:250] http2: stream closed
	
	
	==> kube-controller-manager [417ff803308798fc46c3f330b9797c3df708e85cc3ec2df0bc77bcf7b76e1a77] <==
	W0520 10:33:47.352635       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:33:47.352674       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:33:58.516273       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:33:58.516323       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:34:13.689345       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:34:13.689384       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:34:14.970253       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:34:14.970290       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0520 10:34:21.317114       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="45.685µs"
	W0520 10:34:34.818521       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:34:34.818563       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0520 10:34:34.994300       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="62.867µs"
	W0520 10:34:48.094430       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:34:48.094467       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:34:49.539206       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:34:49.539328       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:35:02.521879       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:35:02.521920       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:35:14.575778       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:35:14.575826       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0520 10:35:28.707151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="4.415µs"
	W0520 10:35:29.295122       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:35:29.295240       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:35:30.391785       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:35:30.391822       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [8c5f80237ca50ab392b7771b09f506b91c1c8ee694fc3a92dbdad76efe65df65] <==
	I0520 10:26:17.239321       1 server_linux.go:69] "Using iptables proxy"
	I0520 10:26:17.300199       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0520 10:26:17.593238       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0520 10:26:17.593365       1 server_linux.go:165] "Using iptables Proxier"
	I0520 10:26:17.598117       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0520 10:26:17.598330       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0520 10:26:17.598411       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 10:26:17.598661       1 server.go:872] "Version info" version="v1.30.1"
	I0520 10:26:17.598882       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 10:26:17.599801       1 config.go:192] "Starting service config controller"
	I0520 10:26:17.599861       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 10:26:17.599912       1 config.go:101] "Starting endpoint slice config controller"
	I0520 10:26:17.599940       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 10:26:17.600441       1 config.go:319] "Starting node config controller"
	I0520 10:26:17.600495       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 10:26:17.700869       1 shared_informer.go:320] Caches are synced for node config
	I0520 10:26:17.700995       1 shared_informer.go:320] Caches are synced for service config
	I0520 10:26:17.701024       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [cd0f27c7474438ae959bd0773f4a28e8fb2987ab8c0924b5ebf24f7f9d9838ea] <==
	W0520 10:25:55.962558       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 10:25:55.962573       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 10:25:55.962615       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 10:25:55.962632       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 10:25:55.962671       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 10:25:55.962686       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 10:25:55.962894       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 10:25:55.962912       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 10:25:55.962947       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 10:25:55.962962       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 10:25:55.962999       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 10:25:55.963014       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 10:25:55.963051       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 10:25:55.963066       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 10:25:55.963107       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 10:25:55.963122       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 10:25:55.963160       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 10:25:55.963174       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 10:25:55.963209       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 10:25:55.963222       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 10:25:55.963304       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 10:25:55.963430       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 10:25:55.963448       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 10:25:55.963868       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0520 10:25:57.354082       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 10:34:07 addons-091599 kubelet[1492]: E0520 10:34:07.981533    1492 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 40s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-6r6ll_default(b5b40db0-7b53-4d4f-821b-1b114daab242)\"" pod="default/hello-world-app-86c47465fc-6r6ll" podUID="b5b40db0-7b53-4d4f-821b-1b114daab242"
	May 20 10:34:19 addons-091599 kubelet[1492]: I0520 10:34:19.981017    1492 scope.go:117] "RemoveContainer" containerID="5a6874f91b07d22841ac9b0ed42bc6b79d744aa7968be5535ead8a4e79979a0b"
	May 20 10:34:20 addons-091599 kubelet[1492]: I0520 10:34:20.300274    1492 scope.go:117] "RemoveContainer" containerID="5a6874f91b07d22841ac9b0ed42bc6b79d744aa7968be5535ead8a4e79979a0b"
	May 20 10:34:21 addons-091599 kubelet[1492]: I0520 10:34:21.303837    1492 scope.go:117] "RemoveContainer" containerID="47c1c0076f011763a287ba5818c293efc59feb247c5a799ae8cf917d618ceb6e"
	May 20 10:34:21 addons-091599 kubelet[1492]: E0520 10:34:21.304121    1492 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-6r6ll_default(b5b40db0-7b53-4d4f-821b-1b114daab242)\"" pod="default/hello-world-app-86c47465fc-6r6ll" podUID="b5b40db0-7b53-4d4f-821b-1b114daab242"
	May 20 10:34:34 addons-091599 kubelet[1492]: I0520 10:34:34.981763    1492 scope.go:117] "RemoveContainer" containerID="47c1c0076f011763a287ba5818c293efc59feb247c5a799ae8cf917d618ceb6e"
	May 20 10:34:34 addons-091599 kubelet[1492]: E0520 10:34:34.982053    1492 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-6r6ll_default(b5b40db0-7b53-4d4f-821b-1b114daab242)\"" pod="default/hello-world-app-86c47465fc-6r6ll" podUID="b5b40db0-7b53-4d4f-821b-1b114daab242"
	May 20 10:34:47 addons-091599 kubelet[1492]: I0520 10:34:47.981532    1492 scope.go:117] "RemoveContainer" containerID="47c1c0076f011763a287ba5818c293efc59feb247c5a799ae8cf917d618ceb6e"
	May 20 10:34:47 addons-091599 kubelet[1492]: E0520 10:34:47.982533    1492 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-6r6ll_default(b5b40db0-7b53-4d4f-821b-1b114daab242)\"" pod="default/hello-world-app-86c47465fc-6r6ll" podUID="b5b40db0-7b53-4d4f-821b-1b114daab242"
	May 20 10:35:02 addons-091599 kubelet[1492]: I0520 10:35:02.981582    1492 scope.go:117] "RemoveContainer" containerID="47c1c0076f011763a287ba5818c293efc59feb247c5a799ae8cf917d618ceb6e"
	May 20 10:35:02 addons-091599 kubelet[1492]: E0520 10:35:02.981937    1492 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-6r6ll_default(b5b40db0-7b53-4d4f-821b-1b114daab242)\"" pod="default/hello-world-app-86c47465fc-6r6ll" podUID="b5b40db0-7b53-4d4f-821b-1b114daab242"
	May 20 10:35:13 addons-091599 kubelet[1492]: I0520 10:35:13.982049    1492 scope.go:117] "RemoveContainer" containerID="47c1c0076f011763a287ba5818c293efc59feb247c5a799ae8cf917d618ceb6e"
	May 20 10:35:13 addons-091599 kubelet[1492]: E0520 10:35:13.982391    1492 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-6r6ll_default(b5b40db0-7b53-4d4f-821b-1b114daab242)\"" pod="default/hello-world-app-86c47465fc-6r6ll" podUID="b5b40db0-7b53-4d4f-821b-1b114daab242"
	May 20 10:35:24 addons-091599 kubelet[1492]: I0520 10:35:24.981327    1492 scope.go:117] "RemoveContainer" containerID="47c1c0076f011763a287ba5818c293efc59feb247c5a799ae8cf917d618ceb6e"
	May 20 10:35:24 addons-091599 kubelet[1492]: E0520 10:35:24.981635    1492 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-6r6ll_default(b5b40db0-7b53-4d4f-821b-1b114daab242)\"" pod="default/hello-world-app-86c47465fc-6r6ll" podUID="b5b40db0-7b53-4d4f-821b-1b114daab242"
	May 20 10:35:30 addons-091599 kubelet[1492]: I0520 10:35:30.002398    1492 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b05bfa4c-b71e-4ba3-82ec-ef3604433ba9-tmp-dir\") pod \"b05bfa4c-b71e-4ba3-82ec-ef3604433ba9\" (UID: \"b05bfa4c-b71e-4ba3-82ec-ef3604433ba9\") "
	May 20 10:35:30 addons-091599 kubelet[1492]: I0520 10:35:30.002481    1492 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvzgh\" (UniqueName: \"kubernetes.io/projected/b05bfa4c-b71e-4ba3-82ec-ef3604433ba9-kube-api-access-fvzgh\") pod \"b05bfa4c-b71e-4ba3-82ec-ef3604433ba9\" (UID: \"b05bfa4c-b71e-4ba3-82ec-ef3604433ba9\") "
	May 20 10:35:30 addons-091599 kubelet[1492]: I0520 10:35:30.003213    1492 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05bfa4c-b71e-4ba3-82ec-ef3604433ba9-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "b05bfa4c-b71e-4ba3-82ec-ef3604433ba9" (UID: "b05bfa4c-b71e-4ba3-82ec-ef3604433ba9"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	May 20 10:35:30 addons-091599 kubelet[1492]: I0520 10:35:30.025790    1492 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05bfa4c-b71e-4ba3-82ec-ef3604433ba9-kube-api-access-fvzgh" (OuterVolumeSpecName: "kube-api-access-fvzgh") pod "b05bfa4c-b71e-4ba3-82ec-ef3604433ba9" (UID: "b05bfa4c-b71e-4ba3-82ec-ef3604433ba9"). InnerVolumeSpecName "kube-api-access-fvzgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 20 10:35:30 addons-091599 kubelet[1492]: I0520 10:35:30.103247    1492 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b05bfa4c-b71e-4ba3-82ec-ef3604433ba9-tmp-dir\") on node \"addons-091599\" DevicePath \"\""
	May 20 10:35:30 addons-091599 kubelet[1492]: I0520 10:35:30.103290    1492 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-fvzgh\" (UniqueName: \"kubernetes.io/projected/b05bfa4c-b71e-4ba3-82ec-ef3604433ba9-kube-api-access-fvzgh\") on node \"addons-091599\" DevicePath \"\""
	May 20 10:35:30 addons-091599 kubelet[1492]: I0520 10:35:30.432183    1492 scope.go:117] "RemoveContainer" containerID="02f9a1a4f7601a1e52b81b2726492153fb6869934090af5ec45db1d64aa54e9f"
	May 20 10:35:30 addons-091599 kubelet[1492]: I0520 10:35:30.471225    1492 scope.go:117] "RemoveContainer" containerID="02f9a1a4f7601a1e52b81b2726492153fb6869934090af5ec45db1d64aa54e9f"
	May 20 10:35:30 addons-091599 kubelet[1492]: E0520 10:35:30.471608    1492 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02f9a1a4f7601a1e52b81b2726492153fb6869934090af5ec45db1d64aa54e9f\": container with ID starting with 02f9a1a4f7601a1e52b81b2726492153fb6869934090af5ec45db1d64aa54e9f not found: ID does not exist" containerID="02f9a1a4f7601a1e52b81b2726492153fb6869934090af5ec45db1d64aa54e9f"
	May 20 10:35:30 addons-091599 kubelet[1492]: I0520 10:35:30.471664    1492 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02f9a1a4f7601a1e52b81b2726492153fb6869934090af5ec45db1d64aa54e9f"} err="failed to get container status \"02f9a1a4f7601a1e52b81b2726492153fb6869934090af5ec45db1d64aa54e9f\": rpc error: code = NotFound desc = could not find container \"02f9a1a4f7601a1e52b81b2726492153fb6869934090af5ec45db1d64aa54e9f\": container with ID starting with 02f9a1a4f7601a1e52b81b2726492153fb6869934090af5ec45db1d64aa54e9f not found: ID does not exist"
	
	
	==> storage-provisioner [0245d6608194b64eee2101b06e4cbfc8ab143d324f261c3b80742e761338c8fc] <==
	I0520 10:26:47.138665       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 10:26:47.190627       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 10:26:47.190825       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 10:26:47.389802       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 10:26:47.428791       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"519ef862-bb4e-4780-b3e2-115d6332d3ad", APIVersion:"v1", ResourceVersion:"931", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-091599_5b22d81e-75c6-49c9-b440-170c8ff90cf1 became leader
	I0520 10:26:47.433329       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-091599_5b22d81e-75c6-49c9-b440-170c8ff90cf1!
	I0520 10:26:47.533991       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-091599_5b22d81e-75c6-49c9-b440-170c8ff90cf1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-091599 -n addons-091599
helpers_test.go:261: (dbg) Run:  kubectl --context addons-091599 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (311.21s)
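Note on the kubelet log above: the repeating "back-off 1m20s restarting failed container" messages are the kubelet's CrashLoopBackOff in action. The restart delay starts small and doubles after every failed start (10s, 20s, 40s, 1m20s, ...), capped at 5m in upstream kubelet, and every pod sync that lands inside the open back-off window is skipped with exactly this error. A minimal Go sketch of that capped doubling, with the initial delay and cap assumed from kubelet's documented defaults rather than taken from this run:

package main

import (
	"fmt"
	"time"
)

// crashBackoff returns the delay applied before restart n of a crashing
// container, assuming a 10s initial delay that doubles per failure and
// is capped at 5m (kubelet's documented CrashLoopBackOff defaults).
func crashBackoff(n int) time.Duration {
	const (
		initial  = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	d := initial
	for i := 0; i < n; i++ {
		d *= 2
		if d > maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 0; n <= 5; n++ {
		fmt.Printf("restart %d: back-off %v\n", n, crashBackoff(n))
	}
	// restart 3 prints "back-off 1m20s", matching the kubelet entries above.
}

Under these assumptions, the identical messages recurring every ~11s between 10:34:47 and 10:35:24 simply mean the kubelet kept retrying sync while the 1m20s window was still open.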

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (380.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-776336 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-776336 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 102 (6m17.359824919s)

                                                
                                                
-- stdout --
	* [old-k8s-version-776336] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18925-1463640/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-1463640/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-776336" primary control-plane node in "old-k8s-version-776336" cluster
	* Pulling base image v0.0.44-1715707529-18887 ...
	* Restarting existing docker container for "old-k8s-version-776336" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-776336 addons enable metrics-server
	
	* Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 11:20:59.501776 1657011 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:20:59.502001 1657011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:20:59.502043 1657011 out.go:304] Setting ErrFile to fd 2...
	I0520 11:20:59.502065 1657011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:20:59.502376 1657011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-1463640/.minikube/bin
	I0520 11:20:59.502876 1657011 out.go:298] Setting JSON to false
	I0520 11:20:59.504019 1657011 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":155007,"bootTime":1716049053,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0520 11:20:59.504139 1657011 start.go:139] virtualization:  
	I0520 11:20:59.511466 1657011 out.go:177] * [old-k8s-version-776336] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0520 11:20:59.514053 1657011 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 11:20:59.515856 1657011 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 11:20:59.514205 1657011 notify.go:220] Checking for updates...
	I0520 11:20:59.519741 1657011 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-1463640/kubeconfig
	I0520 11:20:59.521809 1657011 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-1463640/.minikube
	I0520 11:20:59.523553 1657011 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0520 11:20:59.525236 1657011 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 11:20:59.527328 1657011 config.go:182] Loaded profile config "old-k8s-version-776336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0520 11:20:59.529400 1657011 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0520 11:20:59.530917 1657011 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 11:20:59.558982 1657011 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0520 11:20:59.559115 1657011 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 11:20:59.687958 1657011 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:67 SystemTime:2024-05-20 11:20:59.666769161 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 11:20:59.688076 1657011 docker.go:295] overlay module found
	I0520 11:20:59.689869 1657011 out.go:177] * Using the docker driver based on existing profile
	I0520 11:20:59.691321 1657011 start.go:297] selected driver: docker
	I0520 11:20:59.691337 1657011 start.go:901] validating driver "docker" against &{Name:old-k8s-version-776336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-776336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:20:59.691456 1657011 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 11:20:59.692074 1657011 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 11:20:59.805785 1657011 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:67 SystemTime:2024-05-20 11:20:59.793744162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 11:20:59.806160 1657011 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:20:59.806188 1657011 cni.go:84] Creating CNI manager for ""
	I0520 11:20:59.806196 1657011 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0520 11:20:59.806242 1657011 start.go:340] cluster config:
	{Name:old-k8s-version-776336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-776336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:20:59.808237 1657011 out.go:177] * Starting "old-k8s-version-776336" primary control-plane node in "old-k8s-version-776336" cluster
	I0520 11:20:59.809888 1657011 cache.go:121] Beginning downloading kic base image for docker with crio
	I0520 11:20:59.811449 1657011 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0520 11:20:59.812791 1657011 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 11:20:59.812849 1657011 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-1463640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0520 11:20:59.812874 1657011 cache.go:56] Caching tarball of preloaded images
	I0520 11:20:59.812963 1657011 preload.go:173] Found /home/jenkins/minikube-integration/18925-1463640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0520 11:20:59.812978 1657011 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0520 11:20:59.813077 1657011 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/config.json ...
	I0520 11:20:59.813296 1657011 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0520 11:20:59.841322 1657011 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0520 11:20:59.841350 1657011 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0520 11:20:59.841370 1657011 cache.go:194] Successfully downloaded all kic artifacts
	I0520 11:20:59.841401 1657011 start.go:360] acquireMachinesLock for old-k8s-version-776336: {Name:mk61f513d62fdb2fc362410fc475e1f2742de7cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:20:59.841469 1657011 start.go:364] duration metric: took 43.133µs to acquireMachinesLock for "old-k8s-version-776336"
	I0520 11:20:59.841498 1657011 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:20:59.841508 1657011 fix.go:54] fixHost starting: 
	I0520 11:20:59.841803 1657011 cli_runner.go:164] Run: docker container inspect old-k8s-version-776336 --format={{.State.Status}}
	I0520 11:20:59.865911 1657011 fix.go:112] recreateIfNeeded on old-k8s-version-776336: state=Stopped err=<nil>
	W0520 11:20:59.865944 1657011 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:20:59.867882 1657011 out.go:177] * Restarting existing docker container for "old-k8s-version-776336" ...
	I0520 11:20:59.869501 1657011 cli_runner.go:164] Run: docker start old-k8s-version-776336
	I0520 11:21:00.399644 1657011 cli_runner.go:164] Run: docker container inspect old-k8s-version-776336 --format={{.State.Status}}
	I0520 11:21:00.427192 1657011 kic.go:430] container "old-k8s-version-776336" state is running.
	I0520 11:21:00.427625 1657011 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-776336
	I0520 11:21:00.458797 1657011 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/config.json ...
	I0520 11:21:00.459043 1657011 machine.go:94] provisionDockerMachine start ...
	I0520 11:21:00.459171 1657011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-776336
	I0520 11:21:00.489435 1657011 main.go:141] libmachine: Using SSH client type: native
	I0520 11:21:00.489738 1657011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 40787 <nil> <nil>}
	I0520 11:21:00.489749 1657011 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:21:00.490364 1657011 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41200->127.0.0.1:40787: read: connection reset by peer
	I0520 11:21:03.633375 1657011 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-776336
	
	I0520 11:21:03.633441 1657011 ubuntu.go:169] provisioning hostname "old-k8s-version-776336"
	I0520 11:21:03.633537 1657011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-776336
	I0520 11:21:03.651274 1657011 main.go:141] libmachine: Using SSH client type: native
	I0520 11:21:03.651521 1657011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 40787 <nil> <nil>}
	I0520 11:21:03.651531 1657011 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-776336 && echo "old-k8s-version-776336" | sudo tee /etc/hostname
	I0520 11:21:03.799420 1657011 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-776336
	
	I0520 11:21:03.799566 1657011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-776336
	I0520 11:21:03.819376 1657011 main.go:141] libmachine: Using SSH client type: native
	I0520 11:21:03.819622 1657011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 40787 <nil> <nil>}
	I0520 11:21:03.819639 1657011 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-776336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-776336/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-776336' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:21:03.954278 1657011 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:21:03.954304 1657011 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18925-1463640/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-1463640/.minikube}
	I0520 11:21:03.954344 1657011 ubuntu.go:177] setting up certificates
	I0520 11:21:03.954354 1657011 provision.go:84] configureAuth start
	I0520 11:21:03.954435 1657011 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-776336
	I0520 11:21:03.973957 1657011 provision.go:143] copyHostCerts
	I0520 11:21:03.974024 1657011 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-1463640/.minikube/key.pem, removing ...
	I0520 11:21:03.974043 1657011 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-1463640/.minikube/key.pem
	I0520 11:21:03.974118 1657011 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-1463640/.minikube/key.pem (1679 bytes)
	I0520 11:21:03.974224 1657011 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.pem, removing ...
	I0520 11:21:03.974235 1657011 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.pem
	I0520 11:21:03.974263 1657011 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.pem (1082 bytes)
	I0520 11:21:03.974323 1657011 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-1463640/.minikube/cert.pem, removing ...
	I0520 11:21:03.974328 1657011 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-1463640/.minikube/cert.pem
	I0520 11:21:03.974352 1657011 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-1463640/.minikube/cert.pem (1123 bytes)
	I0520 11:21:03.974406 1657011 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-776336 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-776336]
	I0520 11:21:04.671150 1657011 provision.go:177] copyRemoteCerts
	I0520 11:21:04.671307 1657011 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:21:04.671376 1657011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-776336
	I0520 11:21:04.688503 1657011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40787 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/old-k8s-version-776336/id_rsa Username:docker}
	I0520 11:21:04.785006 1657011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 11:21:04.819844 1657011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 11:21:04.852995 1657011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0520 11:21:04.880157 1657011 provision.go:87] duration metric: took 925.781994ms to configureAuth
	I0520 11:21:04.880188 1657011 ubuntu.go:193] setting minikube options for container-runtime
	I0520 11:21:04.880386 1657011 config.go:182] Loaded profile config "old-k8s-version-776336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0520 11:21:04.880509 1657011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-776336
	I0520 11:21:04.900477 1657011 main.go:141] libmachine: Using SSH client type: native
	I0520 11:21:04.900743 1657011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 40787 <nil> <nil>}
	I0520 11:21:04.900762 1657011 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:21:05.357367 1657011 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:21:05.357450 1657011 machine.go:97] duration metric: took 4.89839625s to provisionDockerMachine
	I0520 11:21:05.357476 1657011 start.go:293] postStartSetup for "old-k8s-version-776336" (driver="docker")
	I0520 11:21:05.357516 1657011 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:21:05.357598 1657011 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:21:05.357695 1657011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-776336
	I0520 11:21:05.382575 1657011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40787 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/old-k8s-version-776336/id_rsa Username:docker}
	I0520 11:21:05.479282 1657011 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:21:05.482585 1657011 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0520 11:21:05.482625 1657011 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0520 11:21:05.482636 1657011 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0520 11:21:05.482644 1657011 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0520 11:21:05.482655 1657011 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-1463640/.minikube/addons for local assets ...
	I0520 11:21:05.482718 1657011 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-1463640/.minikube/files for local assets ...
	I0520 11:21:05.482809 1657011 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-1463640/.minikube/files/etc/ssl/certs/14690782.pem -> 14690782.pem in /etc/ssl/certs
	I0520 11:21:05.482921 1657011 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:21:05.492684 1657011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/files/etc/ssl/certs/14690782.pem --> /etc/ssl/certs/14690782.pem (1708 bytes)
	I0520 11:21:05.520297 1657011 start.go:296] duration metric: took 162.778335ms for postStartSetup
	I0520 11:21:05.520379 1657011 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 11:21:05.520435 1657011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-776336
	I0520 11:21:05.541391 1657011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40787 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/old-k8s-version-776336/id_rsa Username:docker}
	I0520 11:21:05.631280 1657011 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0520 11:21:05.636912 1657011 fix.go:56] duration metric: took 5.795394558s for fixHost
	I0520 11:21:05.636941 1657011 start.go:83] releasing machines lock for "old-k8s-version-776336", held for 5.795457408s
	I0520 11:21:05.637025 1657011 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-776336
	I0520 11:21:05.656581 1657011 ssh_runner.go:195] Run: cat /version.json
	I0520 11:21:05.656614 1657011 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:21:05.656640 1657011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-776336
	I0520 11:21:05.656704 1657011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-776336
	I0520 11:21:05.695045 1657011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40787 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/old-k8s-version-776336/id_rsa Username:docker}
	I0520 11:21:05.702675 1657011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40787 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/old-k8s-version-776336/id_rsa Username:docker}
	I0520 11:21:05.797610 1657011 ssh_runner.go:195] Run: systemctl --version
	I0520 11:21:05.927879 1657011 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:21:06.085285 1657011 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 11:21:06.091325 1657011 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:21:06.102209 1657011 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0520 11:21:06.102353 1657011 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:21:06.112655 1657011 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0520 11:21:06.112723 1657011 start.go:494] detecting cgroup driver to use...
	I0520 11:21:06.112782 1657011 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0520 11:21:06.112849 1657011 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:21:06.129063 1657011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:21:06.142765 1657011 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:21:06.142914 1657011 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:21:06.160946 1657011 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:21:06.178041 1657011 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:21:06.285244 1657011 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:21:06.396250 1657011 docker.go:233] disabling docker service ...
	I0520 11:21:06.396398 1657011 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:21:06.415889 1657011 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:21:06.428579 1657011 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:21:06.520055 1657011 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:21:06.650626 1657011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:21:06.664246 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:21:06.689910 1657011 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0520 11:21:06.689993 1657011 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:21:06.700749 1657011 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:21:06.700888 1657011 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:21:06.710732 1657011 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:21:06.722785 1657011 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:21:06.734897 1657011 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:21:06.744826 1657011 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:21:06.754423 1657011 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:21:06.764403 1657011 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:21:06.863732 1657011 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:21:07.035589 1657011 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:21:07.035680 1657011 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:21:07.040886 1657011 start.go:562] Will wait 60s for crictl version
	I0520 11:21:07.040959 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:21:07.045046 1657011 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:21:07.084778 1657011 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0520 11:21:07.084870 1657011 ssh_runner.go:195] Run: crio --version
	I0520 11:21:07.133380 1657011 ssh_runner.go:195] Run: crio --version
	I0520 11:21:07.192987 1657011 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	I0520 11:21:07.194489 1657011 cli_runner.go:164] Run: docker network inspect old-k8s-version-776336 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0520 11:21:07.210305 1657011 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0520 11:21:07.213980 1657011 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:21:07.225326 1657011 kubeadm.go:877] updating cluster {Name:old-k8s-version-776336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-776336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:21:07.225463 1657011 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 11:21:07.225522 1657011 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:21:07.280525 1657011 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 11:21:07.280547 1657011 crio.go:433] Images already preloaded, skipping extraction
	I0520 11:21:07.280609 1657011 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:21:07.331602 1657011 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 11:21:07.331676 1657011 cache_images.go:84] Images are preloaded, skipping loading
	I0520 11:21:07.331702 1657011 kubeadm.go:928] updating node { 192.168.76.2 8443 v1.20.0 crio true true} ...
	I0520 11:21:07.331835 1657011 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-776336 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-776336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:21:07.331950 1657011 ssh_runner.go:195] Run: crio config
	I0520 11:21:07.410949 1657011 cni.go:84] Creating CNI manager for ""
	I0520 11:21:07.410983 1657011 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0520 11:21:07.411011 1657011 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:21:07.411042 1657011 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-776336 NodeName:old-k8s-version-776336 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0520 11:21:07.411258 1657011 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-776336"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:21:07.411359 1657011 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0520 11:21:07.421969 1657011 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:21:07.422100 1657011 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:21:07.432274 1657011 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (480 bytes)
	I0520 11:21:07.455808 1657011 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:21:07.478739 1657011 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0520 11:21:07.500505 1657011 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0520 11:21:07.504330 1657011 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:21:07.517902 1657011 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:21:07.648696 1657011 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:21:07.665058 1657011 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336 for IP: 192.168.76.2
	I0520 11:21:07.665081 1657011 certs.go:194] generating shared ca certs ...
	I0520 11:21:07.665099 1657011 certs.go:226] acquiring lock for ca certs: {Name:mke113fbac30e255083f63bab9dafb629ead7667 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:21:07.665286 1657011 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.key
	I0520 11:21:07.665354 1657011 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/proxy-client-ca.key
	I0520 11:21:07.665369 1657011 certs.go:256] generating profile certs ...
	I0520 11:21:07.665492 1657011 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/client.key
	I0520 11:21:07.665593 1657011 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/apiserver.key.c4a6d18b
	I0520 11:21:07.665691 1657011 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/proxy-client.key
	I0520 11:21:07.665854 1657011 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/1469078.pem (1338 bytes)
	W0520 11:21:07.665915 1657011 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/1469078_empty.pem, impossibly tiny 0 bytes
	I0520 11:21:07.665930 1657011 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 11:21:07.665982 1657011 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca.pem (1082 bytes)
	I0520 11:21:07.666040 1657011 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:21:07.666074 1657011 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/key.pem (1679 bytes)
	I0520 11:21:07.666150 1657011 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/files/etc/ssl/certs/14690782.pem (1708 bytes)
	I0520 11:21:07.666889 1657011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:21:07.702833 1657011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:21:07.811701 1657011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:21:07.910943 1657011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 11:21:07.946026 1657011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0520 11:21:07.987649 1657011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 11:21:08.025634 1657011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:21:08.056884 1657011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:21:08.088469 1657011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/1469078.pem --> /usr/share/ca-certificates/1469078.pem (1338 bytes)
	I0520 11:21:08.120919 1657011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/files/etc/ssl/certs/14690782.pem --> /usr/share/ca-certificates/14690782.pem (1708 bytes)
	I0520 11:21:08.147334 1657011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:21:08.175553 1657011 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:21:08.194661 1657011 ssh_runner.go:195] Run: openssl version
	I0520 11:21:08.200856 1657011 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:21:08.211372 1657011 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:21:08.215048 1657011 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:21:08.215161 1657011 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:21:08.222285 1657011 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:21:08.231733 1657011 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1469078.pem && ln -fs /usr/share/ca-certificates/1469078.pem /etc/ssl/certs/1469078.pem"
	I0520 11:21:08.241740 1657011 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1469078.pem
	I0520 11:21:08.245279 1657011 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:36 /usr/share/ca-certificates/1469078.pem
	I0520 11:21:08.245388 1657011 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1469078.pem
	I0520 11:21:08.253320 1657011 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1469078.pem /etc/ssl/certs/51391683.0"
	I0520 11:21:08.263697 1657011 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14690782.pem && ln -fs /usr/share/ca-certificates/14690782.pem /etc/ssl/certs/14690782.pem"
	I0520 11:21:08.273797 1657011 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14690782.pem
	I0520 11:21:08.277321 1657011 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:36 /usr/share/ca-certificates/14690782.pem
	I0520 11:21:08.277408 1657011 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14690782.pem
	I0520 11:21:08.284978 1657011 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14690782.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:21:08.294599 1657011 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:21:08.299134 1657011 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:21:08.310310 1657011 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:21:08.324821 1657011 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:21:08.332719 1657011 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:21:08.340916 1657011 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:21:08.349249 1657011 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
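Each control-plane certificate is then probed with "openssl x509 -checkend 86400", which exits nonzero if the certificate expires within the next 86400 seconds (24 hours); a failing probe is what would force regeneration before kubeadm runs. A short sketch, with an assumed helper name:

    package certs

    import "os/exec"

    // expiresWithinDay mirrors the -checkend probes above: openssl exits
    // nonzero when the certificate expires inside the 86400-second window.
    func expiresWithinDay(cert string) bool {
        err := exec.Command("openssl", "x509", "-noout", "-in", cert,
            "-checkend", "86400").Run()
        return err != nil
    }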
	I0520 11:21:08.356755 1657011 kubeadm.go:391] StartCluster: {Name:old-k8s-version-776336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-776336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:21:08.356864 1657011 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:21:08.356942 1657011 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:21:08.412532 1657011 cri.go:89] found id: ""
	I0520 11:21:08.412643 1657011 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:21:08.422546 1657011 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:21:08.422616 1657011 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:21:08.422635 1657011 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:21:08.422724 1657011 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:21:08.432529 1657011 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:21:08.433031 1657011 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-776336" does not appear in /home/jenkins/minikube-integration/18925-1463640/kubeconfig
	I0520 11:21:08.433204 1657011 kubeconfig.go:62] /home/jenkins/minikube-integration/18925-1463640/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-776336" cluster setting kubeconfig missing "old-k8s-version-776336" context setting]
	I0520 11:21:08.433541 1657011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/kubeconfig: {Name:mk86e76ecc665bde4f67c226ceb67716f06a54d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
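The kubeconfig check above finds the "old-k8s-version-776336" cluster and context entries missing and rewrites the file under a write lock (500ms retry delay, 1m timeout). A hedged approximation of the repair using client-go's clientcmd package; the flow is an illustration, not minikube's exact code:

    package kubeconf

    import (
        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // Repair adds missing cluster/context entries for a profile, roughly what
    // the "needs updating (will repair)" line above describes.
    func Repair(path, name, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        if _, ok := cfg.Clusters[name]; !ok {
            c := clientcmdapi.NewCluster()
            c.Server = server // e.g. https://192.168.76.2:8443
            cfg.Clusters[name] = c
        }
        if _, ok := cfg.Contexts[name]; !ok {
            ctx := clientcmdapi.NewContext()
            ctx.Cluster = name
            cfg.Contexts[name] = ctx
        }
        return clientcmd.WriteToFile(*cfg, path)
    }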
	I0520 11:21:08.435267 1657011 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:21:08.445111 1657011 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.76.2
	I0520 11:21:08.445184 1657011 kubeadm.go:591] duration metric: took 22.529369ms to restartPrimaryControlPlane
	I0520 11:21:08.445210 1657011 kubeadm.go:393] duration metric: took 88.47873ms to StartCluster
	I0520 11:21:08.445254 1657011 settings.go:142] acquiring lock: {Name:mkcb442de9baf8dd2fb339ccf162868e80429e33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:21:08.445332 1657011 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-1463640/kubeconfig
	I0520 11:21:08.446144 1657011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/kubeconfig: {Name:mk86e76ecc665bde4f67c226ceb67716f06a54d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:21:08.446433 1657011 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:21:08.448798 1657011 out.go:177] * Verifying Kubernetes components...
	I0520 11:21:08.446858 1657011 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:21:08.447012 1657011 config.go:182] Loaded profile config "old-k8s-version-776336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0520 11:21:08.450795 1657011 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:21:08.448963 1657011 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-776336"
	I0520 11:21:08.451022 1657011 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-776336"
	W0520 11:21:08.451052 1657011 addons.go:243] addon storage-provisioner should already be in state true
	I0520 11:21:08.451103 1657011 host.go:66] Checking if "old-k8s-version-776336" exists ...
	I0520 11:21:08.451627 1657011 cli_runner.go:164] Run: docker container inspect old-k8s-version-776336 --format={{.State.Status}}
	I0520 11:21:08.448975 1657011 addons.go:69] Setting dashboard=true in profile "old-k8s-version-776336"
	I0520 11:21:08.451791 1657011 addons.go:234] Setting addon dashboard=true in "old-k8s-version-776336"
	W0520 11:21:08.451857 1657011 addons.go:243] addon dashboard should already be in state true
	I0520 11:21:08.451906 1657011 host.go:66] Checking if "old-k8s-version-776336" exists ...
	I0520 11:21:08.452350 1657011 cli_runner.go:164] Run: docker container inspect old-k8s-version-776336 --format={{.State.Status}}
	I0520 11:21:08.448981 1657011 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-776336"
	I0520 11:21:08.455982 1657011 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-776336"
	I0520 11:21:08.448988 1657011 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-776336"
	I0520 11:21:08.456065 1657011 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-776336"
	W0520 11:21:08.456075 1657011 addons.go:243] addon metrics-server should already be in state true
	I0520 11:21:08.456103 1657011 host.go:66] Checking if "old-k8s-version-776336" exists ...
	I0520 11:21:08.456285 1657011 cli_runner.go:164] Run: docker container inspect old-k8s-version-776336 --format={{.State.Status}}
	I0520 11:21:08.456523 1657011 cli_runner.go:164] Run: docker container inspect old-k8s-version-776336 --format={{.State.Status}}
	I0520 11:21:08.503611 1657011 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0520 11:21:08.505494 1657011 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0520 11:21:08.512481 1657011 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0520 11:21:08.512510 1657011 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0520 11:21:08.512581 1657011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-776336
	I0520 11:21:08.521732 1657011 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:21:08.529723 1657011 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0520 11:21:08.533728 1657011 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 11:21:08.529910 1657011 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:21:08.532325 1657011 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-776336"
	W0520 11:21:08.533766 1657011 addons.go:243] addon default-storageclass should already be in state true
	I0520 11:21:08.533774 1657011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 11:21:08.533798 1657011 host.go:66] Checking if "old-k8s-version-776336" exists ...
	I0520 11:21:08.533847 1657011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-776336
	I0520 11:21:08.534211 1657011 cli_runner.go:164] Run: docker container inspect old-k8s-version-776336 --format={{.State.Status}}
	I0520 11:21:08.533762 1657011 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 11:21:08.534816 1657011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-776336
	I0520 11:21:08.561445 1657011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40787 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/old-k8s-version-776336/id_rsa Username:docker}
	I0520 11:21:08.588085 1657011 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 11:21:08.588113 1657011 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 11:21:08.588187 1657011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-776336
	I0520 11:21:08.602068 1657011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40787 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/old-k8s-version-776336/id_rsa Username:docker}
	I0520 11:21:08.613850 1657011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40787 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/old-k8s-version-776336/id_rsa Username:docker}
	I0520 11:21:08.631581 1657011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40787 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/old-k8s-version-776336/id_rsa Username:docker}
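All four SSH clients above dial 127.0.0.1:40787, the host port Docker published for the container's 22/tcp; the earlier "docker container inspect -f" calls extract exactly that mapping with a Go template. A compact sketch of the lookup, with a hypothetical helper name:

    package ports

    import (
        "os/exec"
        "strings"
    )

    // SSHHostPort asks Docker for the host port mapped to the container's
    // 22/tcp, the value (40787 in this run) the SSH clients dial on 127.0.0.1.
    func SSHHostPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            container).Output()
        return strings.TrimSpace(string(out)), err
    }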
	I0520 11:21:08.739082 1657011 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:21:08.764891 1657011 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-776336" to be "Ready" ...
	I0520 11:21:08.800477 1657011 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0520 11:21:08.800568 1657011 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0520 11:21:08.843461 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:21:08.851334 1657011 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0520 11:21:08.851409 1657011 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0520 11:21:08.917751 1657011 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 11:21:08.917831 1657011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0520 11:21:08.937821 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:21:08.960190 1657011 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0520 11:21:08.960279 1657011 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0520 11:21:09.035872 1657011 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 11:21:09.035953 1657011 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 11:21:09.037756 1657011 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0520 11:21:09.037844 1657011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0520 11:21:09.148097 1657011 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:21:09.148174 1657011 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 11:21:09.159370 1657011 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0520 11:21:09.159440 1657011 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0520 11:21:09.199518 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:09.199650 1657011 retry.go:31] will retry after 349.299432ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0520 11:21:09.225793 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:09.225871 1657011 retry.go:31] will retry after 322.760295ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
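Every kubectl apply in this stretch fails with "connection to the server localhost:8443 was refused" because the apiserver is still coming back up after the restart, so each addon apply is retried with randomized, growing delays (349ms, 322ms, and so on, up to several seconds below). A sketch of that retry shape, with an assumed helper; the real retry.go implementation may differ:

    package retryutil

    import (
        "math/rand"
        "time"
    )

    // RetryUntil re-runs apply with jittered, doubling delays until it
    // succeeds or the deadline passes, the shape behind the
    // "will retry after ..." lines.
    func RetryUntil(maxWait time.Duration, apply func() error) error {
        deadline := time.Now().Add(maxWait)
        delay := 300 * time.Millisecond
        for {
            err := apply()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return err
            }
            // Jitter keeps the parallel addon applies from hammering the
            // apiserver in lockstep; doubling backs off between rounds.
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
            delay *= 2
        }
    }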
	I0520 11:21:09.246600 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:21:09.255342 1657011 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0520 11:21:09.255428 1657011 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0520 11:21:09.315545 1657011 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0520 11:21:09.315635 1657011 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0520 11:21:09.376865 1657011 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0520 11:21:09.376931 1657011 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0520 11:21:09.444471 1657011 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0520 11:21:09.444597 1657011 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	W0520 11:21:09.447357 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:09.447454 1657011 retry.go:31] will retry after 165.756904ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:09.479582 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0520 11:21:09.549638 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:21:09.549784 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0520 11:21:09.580846 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:09.580881 1657011 retry.go:31] will retry after 242.226239ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:09.614159 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0520 11:21:09.679350 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:09.679386 1657011 retry.go:31] will retry after 412.409677ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0520 11:21:09.699588 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:09.699622 1657011 retry.go:31] will retry after 369.828426ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0520 11:21:09.741117 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:09.741152 1657011 retry.go:31] will retry after 357.759751ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:09.824006 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0520 11:21:09.913428 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:09.913463 1657011 retry.go:31] will retry after 310.334701ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:10.069705 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:21:10.092137 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:21:10.099567 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:21:10.224556 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0520 11:21:10.263834 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:10.263894 1657011 retry.go:31] will retry after 606.429247ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0520 11:21:10.293228 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:10.293297 1657011 retry.go:31] will retry after 837.450796ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0520 11:21:10.293387 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:10.293403 1657011 retry.go:31] will retry after 281.84287ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0520 11:21:10.318094 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:10.318126 1657011 retry.go:31] will retry after 665.286348ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:10.575466 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0520 11:21:10.649802 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:10.649883 1657011 retry.go:31] will retry after 1.039769389s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:10.766426 1657011 node_ready.go:53] error getting node "old-k8s-version-776336": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-776336": dial tcp 192.168.76.2:8443: connect: connection refused
	I0520 11:21:10.870801 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0520 11:21:10.959202 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:10.959233 1657011 retry.go:31] will retry after 763.872743ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:10.984578 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0520 11:21:11.082742 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:11.082777 1657011 retry.go:31] will retry after 974.766668ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:11.130921 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0520 11:21:11.201959 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:11.202004 1657011 retry.go:31] will retry after 736.761943ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:11.689902 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:21:11.723299 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0520 11:21:11.778061 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:11.778094 1657011 retry.go:31] will retry after 1.448174079s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0520 11:21:11.819486 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:11.819523 1657011 retry.go:31] will retry after 1.238034716s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:11.939785 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0520 11:21:12.047264 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:12.047296 1657011 retry.go:31] will retry after 1.867321261s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:12.058631 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0520 11:21:12.144303 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:12.144343 1657011 retry.go:31] will retry after 1.771793852s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:13.057845 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0520 11:21:13.128513 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:13.128559 1657011 retry.go:31] will retry after 2.390076235s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:13.226744 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:21:13.265701 1657011 node_ready.go:53] error getting node "old-k8s-version-776336": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-776336": dial tcp 192.168.76.2:8443: connect: connection refused
	W0520 11:21:13.312990 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:13.313025 1657011 retry.go:31] will retry after 1.489908337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:13.915353 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:21:13.916597 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0520 11:21:14.080904 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:14.080938 1657011 retry.go:31] will retry after 2.404769744s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0520 11:21:14.080990 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:14.081014 1657011 retry.go:31] will retry after 1.136852986s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:14.803704 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0520 11:21:14.883832 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:14.883867 1657011 retry.go:31] will retry after 2.485984396s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:15.218512 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0520 11:21:15.266227 1657011 node_ready.go:53] error getting node "old-k8s-version-776336": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-776336": dial tcp 192.168.76.2:8443: connect: connection refused
	W0520 11:21:15.297146 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:15.297235 1657011 retry.go:31] will retry after 2.854456986s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:15.519622 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0520 11:21:15.617385 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:15.617417 1657011 retry.go:31] will retry after 2.487422181s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:16.486385 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0520 11:21:16.573664 1657011 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:16.573696 1657011 retry.go:31] will retry after 2.715791777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:21:17.370946 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:21:18.105537 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:21:18.151885 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0520 11:21:19.289917 1657011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:21:26.846851 1657011 node_ready.go:49] node "old-k8s-version-776336" has status "Ready":"True"
	I0520 11:21:26.846889 1657011 node_ready.go:38] duration metric: took 18.081905274s for node "old-k8s-version-776336" to be "Ready" ...
	I0520 11:21:26.846901 1657011 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0520 11:21:27.239577 1657011 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-grqv6" in "kube-system" namespace to be "Ready" ...
	I0520 11:21:27.423469 1657011 pod_ready.go:92] pod "coredns-74ff55c5b-grqv6" in "kube-system" namespace has status "Ready":"True"
	I0520 11:21:27.423544 1657011 pod_ready.go:81] duration metric: took 183.864981ms for pod "coredns-74ff55c5b-grqv6" in "kube-system" namespace to be "Ready" ...
	I0520 11:21:27.423572 1657011 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-776336" in "kube-system" namespace to be "Ready" ...
	I0520 11:21:27.896045 1657011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.525057577s)
	I0520 11:21:27.896151 1657011 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-776336"
	I0520 11:21:27.904136 1657011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.798563409s)
	I0520 11:21:28.207111 1657011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.055182504s)
	I0520 11:21:28.210337 1657011 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-776336 addons enable metrics-server
	
	I0520 11:21:28.207442 1657011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (8.917498672s)
	I0520 11:21:28.227317 1657011 out.go:177] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0520 11:21:28.229033 1657011 addons.go:505] duration metric: took 19.782170878s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
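Each "duration metric: took ..." line is produced by timing the enclosing step; the addon phase above, for instance, is reported as 19.78s end to end, and the individual applies as 8.9s to 10.5s. A trivial Go sketch of that timing wrapper (an illustration of the pattern, not minikube's code):

	package main

	import (
		"fmt"
		"time"
	)

	// step times fn and prints a "duration metric: took ..." line in the
	// same style as the log above.
	func step(name string, fn func() error) error {
		start := time.Now()
		err := fn()
		fmt.Printf("duration metric: took %s for %s\n", time.Since(start), name)
		return err
	}

	func main() {
		_ = step("enable addons", func() error {
			time.Sleep(50 * time.Millisecond) // stand-in for the real work
			return nil
		})
	}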
	I0520 11:21:29.448191 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:21:31.929850 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:21:33.930860 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:21:35.938903 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:21:38.430945 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:21:40.930027 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:21:42.935139 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:21:45.429888 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:21:47.453333 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:21:49.934917 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:21:52.440302 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:21:54.930391 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:21:57.429693 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:21:59.429891 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:01.431748 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:03.931594 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:06.430454 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:08.432656 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:10.930638 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:12.931011 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:15.430348 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:17.430827 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:19.929880 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:21.931177 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:23.932568 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:26.431806 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:28.936644 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:31.430220 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:33.431604 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:35.932737 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:38.430021 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:40.929116 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:42.929565 1657011 pod_ready.go:102] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:43.429892 1657011 pod_ready.go:92] pod "etcd-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"True"
	I0520 11:22:43.429918 1657011 pod_ready.go:81] duration metric: took 1m16.006307183s for pod "etcd-old-k8s-version-776336" in "kube-system" namespace to be "Ready" ...
	I0520 11:22:43.429932 1657011 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-776336" in "kube-system" namespace to be "Ready" ...
	I0520 11:22:43.435357 1657011 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"True"
	I0520 11:22:43.435383 1657011 pod_ready.go:81] duration metric: took 5.442109ms for pod "kube-apiserver-old-k8s-version-776336" in "kube-system" namespace to be "Ready" ...
	I0520 11:22:43.435394 1657011 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-776336" in "kube-system" namespace to be "Ready" ...
	I0520 11:22:45.441588 1657011 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:47.444715 1657011 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:48.442147 1657011 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"True"
	I0520 11:22:48.442173 1657011 pod_ready.go:81] duration metric: took 5.006771016s for pod "kube-controller-manager-old-k8s-version-776336" in "kube-system" namespace to be "Ready" ...
	I0520 11:22:48.442186 1657011 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f5jcm" in "kube-system" namespace to be "Ready" ...
	I0520 11:22:48.447307 1657011 pod_ready.go:92] pod "kube-proxy-f5jcm" in "kube-system" namespace has status "Ready":"True"
	I0520 11:22:48.447335 1657011 pod_ready.go:81] duration metric: took 5.141464ms for pod "kube-proxy-f5jcm" in "kube-system" namespace to be "Ready" ...
	I0520 11:22:48.447348 1657011 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-776336" in "kube-system" namespace to be "Ready" ...
	I0520 11:22:50.453181 1657011 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:52.453245 1657011 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:53.453684 1657011 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-776336" in "kube-system" namespace has status "Ready":"True"
	I0520 11:22:53.453707 1657011 pod_ready.go:81] duration metric: took 5.006351238s for pod "kube-scheduler-old-k8s-version-776336" in "kube-system" namespace to be "Ready" ...
	I0520 11:22:53.453720 1657011 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace to be "Ready" ...
	I0520 11:22:55.460408 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:22:57.960579 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:00.460592 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:02.461451 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:04.960310 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:07.460291 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:09.961363 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:11.961420 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:14.469063 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:16.960336 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:18.961998 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:21.460527 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:23.465331 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:25.962459 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:28.460002 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:30.470083 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:32.959999 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:34.960744 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:36.964586 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:39.459457 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:41.459916 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:43.467088 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:45.959646 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:47.960397 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:49.960943 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:51.961202 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:54.459665 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:56.459759 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:23:58.460158 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:00.469282 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:02.961480 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:04.962348 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:06.963238 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:09.459541 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:11.459759 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:13.464852 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:15.960315 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:17.984793 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:20.459562 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:22.959892 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:24.961724 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:26.961989 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:29.459873 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:31.459945 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:33.460515 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:35.461080 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:37.961004 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:40.459835 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:42.460277 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:44.460663 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:46.961021 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:49.460382 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:51.960497 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:53.961478 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:56.459744 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:24:58.459895 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:00.461536 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:02.960842 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:04.962221 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:07.460632 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:09.460701 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:11.960364 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:14.460438 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:16.960303 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:19.460034 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:21.959756 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:23.960250 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:26.460584 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:28.460632 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:30.961098 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:33.459149 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:35.460844 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:37.959845 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:39.960767 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:42.465130 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:44.960612 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:46.961023 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:49.459860 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:51.460063 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:53.960417 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:56.460157 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:25:58.960740 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:26:01.459801 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:26:03.961025 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:26:06.459923 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:26:08.460480 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:26:10.960749 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:26:13.460226 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:26:15.460581 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:26:17.960723 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:26:19.963881 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:26:22.460340 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:26:24.460460 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:26:26.959649 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:26:29.460571 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:26:31.961575 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:26:34.460217 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:26:36.960497 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:26:39.459506 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:26:41.459556 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:26:43.460197 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:26:45.460974 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:26:47.961034 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:26:50.460261 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:26:52.960497 1657011 pod_ready.go:102] pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace has status "Ready":"False"
	I0520 11:26:53.459870 1657011 pod_ready.go:81] duration metric: took 4m0.006138223s for pod "metrics-server-9975d5f86-qdks4" in "kube-system" namespace to be "Ready" ...
	E0520 11:26:53.459893 1657011 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0520 11:26:53.459903 1657011 pod_ready.go:38] duration metric: took 5m26.612991586s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
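The repeated pod_ready.go:102 lines above are a poll of the pod's Ready condition every couple of seconds; metrics-server-9975d5f86-qdks4 never reports Ready (its image pull is failing, as the kubelet entries further down show), so the wait gives up after 4m0.006s with "context deadline exceeded". A minimal client-go sketch of the same condition check, assuming the standard k8s.io/client-go API (this is not minikube's pod_ready.go):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True, the
	// same condition the pod_ready.go:102 lines keep printing as False.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Path is illustrative; on the node minikube uses /var/lib/minikube/kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-9975d5f86-qdks4", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("WaitExtra: waitPodCondition:", ctx.Err()) // context deadline exceeded
				return
			case <-time.After(2 * time.Second):
			}
		}
	}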
	I0520 11:26:53.459918 1657011 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:26:53.459948 1657011 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:26:53.460029 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:26:53.512214 1657011 cri.go:89] found id: "0b14a0f156605ce747bbd3ec92b34eb3ecf92d995ef08cfe0d147e8a56eb0eb3"
	I0520 11:26:53.512248 1657011 cri.go:89] found id: ""
	I0520 11:26:53.512258 1657011 logs.go:276] 1 containers: [0b14a0f156605ce747bbd3ec92b34eb3ecf92d995ef08cfe0d147e8a56eb0eb3]
	I0520 11:26:53.512331 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:26:53.516251 1657011 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:26:53.516328 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:26:53.556211 1657011 cri.go:89] found id: "a0c29f0d3e719d5fdc0931e9109cfae19b3167c2b4e839f25c7b1b63784a02a4"
	I0520 11:26:53.556234 1657011 cri.go:89] found id: ""
	I0520 11:26:53.556243 1657011 logs.go:276] 1 containers: [a0c29f0d3e719d5fdc0931e9109cfae19b3167c2b4e839f25c7b1b63784a02a4]
	I0520 11:26:53.556303 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:26:53.559924 1657011 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:26:53.559997 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:26:53.609276 1657011 cri.go:89] found id: "e346471df6217fe773e2ef8f7b75f5b46cd71d6eb9cbc805448692e543b15853"
	I0520 11:26:53.609300 1657011 cri.go:89] found id: ""
	I0520 11:26:53.609308 1657011 logs.go:276] 1 containers: [e346471df6217fe773e2ef8f7b75f5b46cd71d6eb9cbc805448692e543b15853]
	I0520 11:26:53.609365 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:26:53.613170 1657011 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:26:53.613243 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:26:53.667196 1657011 cri.go:89] found id: "200301c79d92cc1dca51aa9d3818041786dbc0fc25722073be3c056ab8ddc40d"
	I0520 11:26:53.667216 1657011 cri.go:89] found id: ""
	I0520 11:26:53.667225 1657011 logs.go:276] 1 containers: [200301c79d92cc1dca51aa9d3818041786dbc0fc25722073be3c056ab8ddc40d]
	I0520 11:26:53.667291 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:26:53.671970 1657011 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:26:53.672079 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:26:53.728222 1657011 cri.go:89] found id: "b026cb8ba53f0d1f48335254f05077b2a31ee7e5a35af6a7b530c5f18dccae5a"
	I0520 11:26:53.728245 1657011 cri.go:89] found id: ""
	I0520 11:26:53.728253 1657011 logs.go:276] 1 containers: [b026cb8ba53f0d1f48335254f05077b2a31ee7e5a35af6a7b530c5f18dccae5a]
	I0520 11:26:53.728311 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:26:53.732459 1657011 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:26:53.732529 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:26:53.795721 1657011 cri.go:89] found id: "7f0c6c3acf70df9c58574fae04af747df228563cb3da7252a8324c5242528d4e"
	I0520 11:26:53.795745 1657011 cri.go:89] found id: ""
	I0520 11:26:53.795755 1657011 logs.go:276] 1 containers: [7f0c6c3acf70df9c58574fae04af747df228563cb3da7252a8324c5242528d4e]
	I0520 11:26:53.795813 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:26:53.799762 1657011 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:26:53.799828 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:26:53.854711 1657011 cri.go:89] found id: "b92863fb0bcbb6bb4f4fb5c32ac048568d8bea72be8749218a1c81ceb8dcbf67"
	I0520 11:26:53.854734 1657011 cri.go:89] found id: ""
	I0520 11:26:53.854743 1657011 logs.go:276] 1 containers: [b92863fb0bcbb6bb4f4fb5c32ac048568d8bea72be8749218a1c81ceb8dcbf67]
	I0520 11:26:53.854797 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:26:53.858901 1657011 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:26:53.858974 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:26:53.917739 1657011 cri.go:89] found id: "80aac8daed1fe77959700c7a1f1a34853c09f3dbeb46648d0ab015345f91e607"
	I0520 11:26:53.917766 1657011 cri.go:89] found id: ""
	I0520 11:26:53.917774 1657011 logs.go:276] 1 containers: [80aac8daed1fe77959700c7a1f1a34853c09f3dbeb46648d0ab015345f91e607]
	I0520 11:26:53.917830 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:26:53.923141 1657011 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:26:53.923214 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:26:53.978276 1657011 cri.go:89] found id: "478b985a70c80d44c1dc3a606124f6d8d1a1b16caa15815f4d319199182b0973"
	I0520 11:26:53.978300 1657011 cri.go:89] found id: ""
	I0520 11:26:53.978307 1657011 logs.go:276] 1 containers: [478b985a70c80d44c1dc3a606124f6d8d1a1b16caa15815f4d319199182b0973]
	I0520 11:26:53.978364 1657011 ssh_runner.go:195] Run: which crictl
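Before gathering logs, each component's container ID is discovered with "sudo crictl ps -a --quiet --name=<component>" (the "found id:" lines above); the "Gathering logs for ..." steps that follow then run "sudo crictl logs --tail 400 <id>" on each ID. A small Go sketch of the same discover-then-fetch flow, shelling out to crictl the way the ssh_runner calls do (illustrative; assumes crictl is on PATH and sudo is available):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors "sudo crictl ps -a --quiet --name=<component>":
	// it returns the IDs of all containers (any state) whose name matches.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	// tailLogs mirrors "sudo crictl logs --tail 400 <id>".
	func tailLogs(id string) (string, error) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := containerIDs(c)
			if err != nil || len(ids) == 0 {
				fmt.Println(c, ": no containers found:", err)
				continue
			}
			logs, _ := tailLogs(ids[0])
			fmt.Printf("=== %s (%s) ===\n%s\n", c, ids[0], logs)
		}
	}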
	I0520 11:26:53.982146 1657011 logs.go:123] Gathering logs for coredns [e346471df6217fe773e2ef8f7b75f5b46cd71d6eb9cbc805448692e543b15853] ...
	I0520 11:26:53.982173 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e346471df6217fe773e2ef8f7b75f5b46cd71d6eb9cbc805448692e543b15853"
	I0520 11:26:54.050796 1657011 logs.go:123] Gathering logs for kube-scheduler [200301c79d92cc1dca51aa9d3818041786dbc0fc25722073be3c056ab8ddc40d] ...
	I0520 11:26:54.050827 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 200301c79d92cc1dca51aa9d3818041786dbc0fc25722073be3c056ab8ddc40d"
	I0520 11:26:54.107903 1657011 logs.go:123] Gathering logs for container status ...
	I0520 11:26:54.107935 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:26:54.188486 1657011 logs.go:123] Gathering logs for kube-apiserver [0b14a0f156605ce747bbd3ec92b34eb3ecf92d995ef08cfe0d147e8a56eb0eb3] ...
	I0520 11:26:54.188568 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b14a0f156605ce747bbd3ec92b34eb3ecf92d995ef08cfe0d147e8a56eb0eb3"
	I0520 11:26:54.324072 1657011 logs.go:123] Gathering logs for kubernetes-dashboard [478b985a70c80d44c1dc3a606124f6d8d1a1b16caa15815f4d319199182b0973] ...
	I0520 11:26:54.324107 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 478b985a70c80d44c1dc3a606124f6d8d1a1b16caa15815f4d319199182b0973"
	I0520 11:26:54.392484 1657011 logs.go:123] Gathering logs for storage-provisioner [80aac8daed1fe77959700c7a1f1a34853c09f3dbeb46648d0ab015345f91e607] ...
	I0520 11:26:54.392514 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80aac8daed1fe77959700c7a1f1a34853c09f3dbeb46648d0ab015345f91e607"
	I0520 11:26:54.451574 1657011 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:26:54.451601 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:26:54.560291 1657011 logs.go:123] Gathering logs for kube-proxy [b026cb8ba53f0d1f48335254f05077b2a31ee7e5a35af6a7b530c5f18dccae5a] ...
	I0520 11:26:54.560368 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b026cb8ba53f0d1f48335254f05077b2a31ee7e5a35af6a7b530c5f18dccae5a"
	I0520 11:26:54.618045 1657011 logs.go:123] Gathering logs for kube-controller-manager [7f0c6c3acf70df9c58574fae04af747df228563cb3da7252a8324c5242528d4e] ...
	I0520 11:26:54.618073 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f0c6c3acf70df9c58574fae04af747df228563cb3da7252a8324c5242528d4e"
	I0520 11:26:54.712023 1657011 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:26:54.712071 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:26:54.921105 1657011 logs.go:123] Gathering logs for etcd [a0c29f0d3e719d5fdc0931e9109cfae19b3167c2b4e839f25c7b1b63784a02a4] ...
	I0520 11:26:54.921136 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0c29f0d3e719d5fdc0931e9109cfae19b3167c2b4e839f25c7b1b63784a02a4"
	I0520 11:26:54.992013 1657011 logs.go:123] Gathering logs for kindnet [b92863fb0bcbb6bb4f4fb5c32ac048568d8bea72be8749218a1c81ceb8dcbf67] ...
	I0520 11:26:54.992090 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b92863fb0bcbb6bb4f4fb5c32ac048568d8bea72be8749218a1c81ceb8dcbf67"
	I0520 11:26:55.067881 1657011 logs.go:123] Gathering logs for kubelet ...
	I0520 11:26:55.067944 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0520 11:26:55.133533 1657011 logs.go:138] Found kubelet problem: May 20 11:21:26 old-k8s-version-776336 kubelet[730]: E0520 11:21:26.809242     730 reflector.go:138] object-"kube-system"/"kube-proxy-token-dxbg4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-dxbg4" is forbidden: User "system:node:old-k8s-version-776336" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-776336' and this object
	W0520 11:26:55.133818 1657011 logs.go:138] Found kubelet problem: May 20 11:21:26 old-k8s-version-776336 kubelet[730]: E0520 11:21:26.815246     730 reflector.go:138] object-"kube-system"/"metrics-server-token-lstwk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-lstwk" is forbidden: User "system:node:old-k8s-version-776336" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-776336' and this object
	W0520 11:26:55.134043 1657011 logs.go:138] Found kubelet problem: May 20 11:21:26 old-k8s-version-776336 kubelet[730]: E0520 11:21:26.815362     730 reflector.go:138] object-"kube-system"/"kindnet-token-5d2mm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-5d2mm" is forbidden: User "system:node:old-k8s-version-776336" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-776336' and this object
	W0520 11:26:55.134255 1657011 logs.go:138] Found kubelet problem: May 20 11:21:26 old-k8s-version-776336 kubelet[730]: E0520 11:21:26.815419     730 reflector.go:138] object-"default"/"default-token-2c9fs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-2c9fs" is forbidden: User "system:node:old-k8s-version-776336" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-776336' and this object
	W0520 11:26:55.134465 1657011 logs.go:138] Found kubelet problem: May 20 11:21:26 old-k8s-version-776336 kubelet[730]: E0520 11:21:26.815480     730 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-776336" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-776336' and this object
	W0520 11:26:55.134668 1657011 logs.go:138] Found kubelet problem: May 20 11:21:26 old-k8s-version-776336 kubelet[730]: E0520 11:21:26.815526     730 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-776336" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-776336' and this object
	W0520 11:26:55.134881 1657011 logs.go:138] Found kubelet problem: May 20 11:21:26 old-k8s-version-776336 kubelet[730]: E0520 11:21:26.815596     730 reflector.go:138] object-"kube-system"/"coredns-token-8nh95": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-8nh95" is forbidden: User "system:node:old-k8s-version-776336" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-776336' and this object
	W0520 11:26:55.135118 1657011 logs.go:138] Found kubelet problem: May 20 11:21:26 old-k8s-version-776336 kubelet[730]: E0520 11:21:26.818042     730 reflector.go:138] object-"kube-system"/"storage-provisioner-token-jt6tq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-jt6tq" is forbidden: User "system:node:old-k8s-version-776336" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-776336' and this object
	W0520 11:26:55.146321 1657011 logs.go:138] Found kubelet problem: May 20 11:21:30 old-k8s-version-776336 kubelet[730]: E0520 11:21:30.748462     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0520 11:26:55.146532 1657011 logs.go:138] Found kubelet problem: May 20 11:21:31 old-k8s-version-776336 kubelet[730]: E0520 11:21:31.360081     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:26:55.148610 1657011 logs.go:138] Found kubelet problem: May 20 11:21:42 old-k8s-version-776336 kubelet[730]: E0520 11:21:42.354554     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0520 11:26:55.151115 1657011 logs.go:138] Found kubelet problem: May 20 11:21:55 old-k8s-version-776336 kubelet[730]: E0520 11:21:55.457689     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.151679 1657011 logs.go:138] Found kubelet problem: May 20 11:21:56 old-k8s-version-776336 kubelet[730]: E0520 11:21:56.460106     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.151899 1657011 logs.go:138] Found kubelet problem: May 20 11:21:57 old-k8s-version-776336 kubelet[730]: E0520 11:21:57.311677     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:26:55.152294 1657011 logs.go:138] Found kubelet problem: May 20 11:22:02 old-k8s-version-776336 kubelet[730]: E0520 11:22:02.482262     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.154737 1657011 logs.go:138] Found kubelet problem: May 20 11:22:08 old-k8s-version-776336 kubelet[730]: E0520 11:22:08.322783     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0520 11:26:55.155443 1657011 logs.go:138] Found kubelet problem: May 20 11:22:17 old-k8s-version-776336 kubelet[730]: E0520 11:22:17.507098     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.155816 1657011 logs.go:138] Found kubelet problem: May 20 11:22:22 old-k8s-version-776336 kubelet[730]: E0520 11:22:22.482798     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.156031 1657011 logs.go:138] Found kubelet problem: May 20 11:22:23 old-k8s-version-776336 kubelet[730]: E0520 11:22:23.311553     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:26:55.156396 1657011 logs.go:138] Found kubelet problem: May 20 11:22:36 old-k8s-version-776336 kubelet[730]: E0520 11:22:36.311591     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.156622 1657011 logs.go:138] Found kubelet problem: May 20 11:22:38 old-k8s-version-776336 kubelet[730]: E0520 11:22:38.312311     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:26:55.157224 1657011 logs.go:138] Found kubelet problem: May 20 11:22:47 old-k8s-version-776336 kubelet[730]: E0520 11:22:47.567324     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.157559 1657011 logs.go:138] Found kubelet problem: May 20 11:22:52 old-k8s-version-776336 kubelet[730]: E0520 11:22:52.482120     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.159827 1657011 logs.go:138] Found kubelet problem: May 20 11:22:53 old-k8s-version-776336 kubelet[730]: E0520 11:22:53.323112     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0520 11:26:55.160225 1657011 logs.go:138] Found kubelet problem: May 20 11:23:05 old-k8s-version-776336 kubelet[730]: E0520 11:23:05.311216     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.160435 1657011 logs.go:138] Found kubelet problem: May 20 11:23:05 old-k8s-version-776336 kubelet[730]: E0520 11:23:05.312235     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:26:55.160807 1657011 logs.go:138] Found kubelet problem: May 20 11:23:18 old-k8s-version-776336 kubelet[730]: E0520 11:23:18.311005     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.161029 1657011 logs.go:138] Found kubelet problem: May 20 11:23:19 old-k8s-version-776336 kubelet[730]: E0520 11:23:19.311931     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:26:55.161510 1657011 logs.go:138] Found kubelet problem: May 20 11:23:34 old-k8s-version-776336 kubelet[730]: E0520 11:23:34.312149     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:26:55.161870 1657011 logs.go:138] Found kubelet problem: May 20 11:23:34 old-k8s-version-776336 kubelet[730]: E0520 11:23:34.647790     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.162206 1657011 logs.go:138] Found kubelet problem: May 20 11:23:42 old-k8s-version-776336 kubelet[730]: E0520 11:23:42.482249     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.162407 1657011 logs.go:138] Found kubelet problem: May 20 11:23:48 old-k8s-version-776336 kubelet[730]: E0520 11:23:48.311989     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:26:55.162742 1657011 logs.go:138] Found kubelet problem: May 20 11:23:54 old-k8s-version-776336 kubelet[730]: E0520 11:23:54.311066     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.162934 1657011 logs.go:138] Found kubelet problem: May 20 11:24:02 old-k8s-version-776336 kubelet[730]: E0520 11:24:02.312243     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:26:55.163359 1657011 logs.go:138] Found kubelet problem: May 20 11:24:06 old-k8s-version-776336 kubelet[730]: E0520 11:24:06.313386     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.166411 1657011 logs.go:138] Found kubelet problem: May 20 11:24:17 old-k8s-version-776336 kubelet[730]: E0520 11:24:17.319931     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0520 11:26:55.166751 1657011 logs.go:138] Found kubelet problem: May 20 11:24:19 old-k8s-version-776336 kubelet[730]: E0520 11:24:19.310996     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.166939 1657011 logs.go:138] Found kubelet problem: May 20 11:24:28 old-k8s-version-776336 kubelet[730]: E0520 11:24:28.311548     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:26:55.167275 1657011 logs.go:138] Found kubelet problem: May 20 11:24:32 old-k8s-version-776336 kubelet[730]: E0520 11:24:32.310984     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.167608 1657011 logs.go:138] Found kubelet problem: May 20 11:24:43 old-k8s-version-776336 kubelet[730]: E0520 11:24:43.311302     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.167796 1657011 logs.go:138] Found kubelet problem: May 20 11:24:43 old-k8s-version-776336 kubelet[730]: E0520 11:24:43.312650     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:26:55.168116 1657011 logs.go:138] Found kubelet problem: May 20 11:24:58 old-k8s-version-776336 kubelet[730]: E0520 11:24:58.313801     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:26:55.168578 1657011 logs.go:138] Found kubelet problem: May 20 11:24:59 old-k8s-version-776336 kubelet[730]: E0520 11:24:59.771509     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.168911 1657011 logs.go:138] Found kubelet problem: May 20 11:25:02 old-k8s-version-776336 kubelet[730]: E0520 11:25:02.482351     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.169100 1657011 logs.go:138] Found kubelet problem: May 20 11:25:13 old-k8s-version-776336 kubelet[730]: E0520 11:25:13.311740     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:26:55.169433 1657011 logs.go:138] Found kubelet problem: May 20 11:25:14 old-k8s-version-776336 kubelet[730]: E0520 11:25:14.311041     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.170301 1657011 logs.go:138] Found kubelet problem: May 20 11:25:26 old-k8s-version-776336 kubelet[730]: E0520 11:25:26.311269     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.170493 1657011 logs.go:138] Found kubelet problem: May 20 11:25:28 old-k8s-version-776336 kubelet[730]: E0520 11:25:28.311449     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:26:55.170825 1657011 logs.go:138] Found kubelet problem: May 20 11:25:40 old-k8s-version-776336 kubelet[730]: E0520 11:25:40.311409     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.171052 1657011 logs.go:138] Found kubelet problem: May 20 11:25:40 old-k8s-version-776336 kubelet[730]: E0520 11:25:40.312759     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:26:55.171384 1657011 logs.go:138] Found kubelet problem: May 20 11:25:52 old-k8s-version-776336 kubelet[730]: E0520 11:25:52.311113     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.171571 1657011 logs.go:138] Found kubelet problem: May 20 11:25:53 old-k8s-version-776336 kubelet[730]: E0520 11:25:53.311487     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:26:55.171903 1657011 logs.go:138] Found kubelet problem: May 20 11:26:05 old-k8s-version-776336 kubelet[730]: E0520 11:26:05.311044     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.172108 1657011 logs.go:138] Found kubelet problem: May 20 11:26:07 old-k8s-version-776336 kubelet[730]: E0520 11:26:07.311454     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:26:55.172691 1657011 logs.go:138] Found kubelet problem: May 20 11:26:17 old-k8s-version-776336 kubelet[730]: E0520 11:26:17.312403     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.172876 1657011 logs.go:138] Found kubelet problem: May 20 11:26:22 old-k8s-version-776336 kubelet[730]: E0520 11:26:22.311955     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:26:55.173205 1657011 logs.go:138] Found kubelet problem: May 20 11:26:29 old-k8s-version-776336 kubelet[730]: E0520 11:26:29.310991     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.173391 1657011 logs.go:138] Found kubelet problem: May 20 11:26:35 old-k8s-version-776336 kubelet[730]: E0520 11:26:35.311614     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:26:55.173732 1657011 logs.go:138] Found kubelet problem: May 20 11:26:40 old-k8s-version-776336 kubelet[730]: E0520 11:26:40.310989     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.173917 1657011 logs.go:138] Found kubelet problem: May 20 11:26:46 old-k8s-version-776336 kubelet[730]: E0520 11:26:46.315105     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:26:55.174246 1657011 logs.go:138] Found kubelet problem: May 20 11:26:53 old-k8s-version-776336 kubelet[730]: E0520 11:26:53.310994     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	I0520 11:26:55.174257 1657011 logs.go:123] Gathering logs for dmesg ...
	I0520 11:26:55.174271 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:26:55.194470 1657011 out.go:304] Setting ErrFile to fd 2...
	I0520 11:26:55.194497 1657011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0520 11:26:55.195619 1657011 out.go:239] X Problems detected in kubelet:
	W0520 11:26:55.195656 1657011 out.go:239]   May 20 11:26:29 old-k8s-version-776336 kubelet[730]: E0520 11:26:29.310991     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.195665 1657011 out.go:239]   May 20 11:26:35 old-k8s-version-776336 kubelet[730]: E0520 11:26:35.311614     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:26:55.195675 1657011 out.go:239]   May 20 11:26:40 old-k8s-version-776336 kubelet[730]: E0520 11:26:40.310989     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:26:55.195689 1657011 out.go:239]   May 20 11:26:46 old-k8s-version-776336 kubelet[730]: E0520 11:26:46.315105     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:26:55.195707 1657011 out.go:239]   May 20 11:26:53 old-k8s-version-776336 kubelet[730]: E0520 11:26:53.310994     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	I0520 11:26:55.195714 1657011 out.go:304] Setting ErrFile to fd 2...
	I0520 11:26:55.195724 1657011 out.go:338] TERM=,COLORTERM=, which probably does not support color
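Note: the kubelet problems collected above reduce to two recurring signatures. The metrics-server-9975d5f86-qdks4 pod cannot pull fake.domain/registry.k8s.io/echoserver:1.4 (ErrImagePull, then ImagePullBackOff) because the registry host fake.domain intentionally does not resolve, and the dashboard-metrics-scraper-8d5bb5db8-tzcxg pod sits in CrashLoopBackOff, with the kubelet's standard exponential restart back-off visible in the timestamps (10s, 20s, 40s, 1m20s, 2m40s; the kubelet caps this at 5m). A minimal sketch for confirming both conditions by hand inside the node, assuming a shell obtained via minikube ssh and that nslookup is present in the node image (the kubectl binary and kubeconfig paths are copied from the log above):

	# The registry host is expected to be unresolvable; this is the injected fault
	nslookup fake.domain || true
	# Show the two failing pods and their restart counts
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get pods --all-namespaces | grep -E 'metrics-server|dashboard-metrics-scraper'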
	I0520 11:27:05.196731 1657011 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:27:05.209371 1657011 api_server.go:72] duration metric: took 5m56.762872229s to wait for apiserver process to appear ...
	I0520 11:27:05.209406 1657011 api_server.go:88] waiting for apiserver healthz status ...
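The healthz wait that follows keeps gathering logs while polling the apiserver's health endpoint. A hedged one-off equivalent of that probe, again run inside the node with the binaries referenced in the log (/healthz is the standard apiserver health endpoint; the paths are copied from the transcript):

	# Query the apiserver health endpoint directly; prints "ok" when healthy
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get --raw='/healthz'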
	I0520 11:27:05.209441 1657011 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:27:05.209503 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:27:05.254986 1657011 cri.go:89] found id: "0b14a0f156605ce747bbd3ec92b34eb3ecf92d995ef08cfe0d147e8a56eb0eb3"
	I0520 11:27:05.255010 1657011 cri.go:89] found id: ""
	I0520 11:27:05.255018 1657011 logs.go:276] 1 containers: [0b14a0f156605ce747bbd3ec92b34eb3ecf92d995ef08cfe0d147e8a56eb0eb3]
	I0520 11:27:05.255097 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:27:05.258937 1657011 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:27:05.259014 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:27:05.304683 1657011 cri.go:89] found id: "a0c29f0d3e719d5fdc0931e9109cfae19b3167c2b4e839f25c7b1b63784a02a4"
	I0520 11:27:05.304705 1657011 cri.go:89] found id: ""
	I0520 11:27:05.304712 1657011 logs.go:276] 1 containers: [a0c29f0d3e719d5fdc0931e9109cfae19b3167c2b4e839f25c7b1b63784a02a4]
	I0520 11:27:05.304776 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:27:05.308619 1657011 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:27:05.308696 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:27:05.363412 1657011 cri.go:89] found id: "e346471df6217fe773e2ef8f7b75f5b46cd71d6eb9cbc805448692e543b15853"
	I0520 11:27:05.363436 1657011 cri.go:89] found id: ""
	I0520 11:27:05.363445 1657011 logs.go:276] 1 containers: [e346471df6217fe773e2ef8f7b75f5b46cd71d6eb9cbc805448692e543b15853]
	I0520 11:27:05.363501 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:27:05.368214 1657011 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:27:05.368293 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:27:05.406961 1657011 cri.go:89] found id: "200301c79d92cc1dca51aa9d3818041786dbc0fc25722073be3c056ab8ddc40d"
	I0520 11:27:05.406985 1657011 cri.go:89] found id: ""
	I0520 11:27:05.407004 1657011 logs.go:276] 1 containers: [200301c79d92cc1dca51aa9d3818041786dbc0fc25722073be3c056ab8ddc40d]
	I0520 11:27:05.407109 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:27:05.410985 1657011 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:27:05.411075 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:27:05.452948 1657011 cri.go:89] found id: "b026cb8ba53f0d1f48335254f05077b2a31ee7e5a35af6a7b530c5f18dccae5a"
	I0520 11:27:05.452972 1657011 cri.go:89] found id: ""
	I0520 11:27:05.452981 1657011 logs.go:276] 1 containers: [b026cb8ba53f0d1f48335254f05077b2a31ee7e5a35af6a7b530c5f18dccae5a]
	I0520 11:27:05.453039 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:27:05.456828 1657011 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:27:05.456935 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:27:05.496976 1657011 cri.go:89] found id: "7f0c6c3acf70df9c58574fae04af747df228563cb3da7252a8324c5242528d4e"
	I0520 11:27:05.496998 1657011 cri.go:89] found id: ""
	I0520 11:27:05.497006 1657011 logs.go:276] 1 containers: [7f0c6c3acf70df9c58574fae04af747df228563cb3da7252a8324c5242528d4e]
	I0520 11:27:05.497084 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:27:05.500811 1657011 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:27:05.500907 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:27:05.543135 1657011 cri.go:89] found id: "b92863fb0bcbb6bb4f4fb5c32ac048568d8bea72be8749218a1c81ceb8dcbf67"
	I0520 11:27:05.543154 1657011 cri.go:89] found id: ""
	I0520 11:27:05.543162 1657011 logs.go:276] 1 containers: [b92863fb0bcbb6bb4f4fb5c32ac048568d8bea72be8749218a1c81ceb8dcbf67]
	I0520 11:27:05.543221 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:27:05.547142 1657011 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:27:05.547234 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:27:05.606997 1657011 cri.go:89] found id: "80aac8daed1fe77959700c7a1f1a34853c09f3dbeb46648d0ab015345f91e607"
	I0520 11:27:05.607065 1657011 cri.go:89] found id: ""
	I0520 11:27:05.607080 1657011 logs.go:276] 1 containers: [80aac8daed1fe77959700c7a1f1a34853c09f3dbeb46648d0ab015345f91e607]
	I0520 11:27:05.607144 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:27:05.610756 1657011 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:27:05.610852 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:27:05.656696 1657011 cri.go:89] found id: "478b985a70c80d44c1dc3a606124f6d8d1a1b16caa15815f4d319199182b0973"
	I0520 11:27:05.656718 1657011 cri.go:89] found id: ""
	I0520 11:27:05.656726 1657011 logs.go:276] 1 containers: [478b985a70c80d44c1dc3a606124f6d8d1a1b16caa15815f4d319199182b0973]
	I0520 11:27:05.656785 1657011 ssh_runner.go:195] Run: which crictl
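The nine crictl lookups above all follow the same pattern: list every container (running or exited) filtered by name and keep the ID for the log-gathering step that follows. A compact sketch of that discovery loop, using exactly the component names queried in the log and assuming a shell inside the node:

	# Reproduce the container-ID discovery performed above, one component at a time
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet storage-provisioner kubernetes-dashboard; do
	  id=$(sudo crictl ps -a --quiet --name="$name")
	  printf '%s: %s\n' "$name" "${id:-<none>}"
	done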
	I0520 11:27:05.660781 1657011 logs.go:123] Gathering logs for coredns [e346471df6217fe773e2ef8f7b75f5b46cd71d6eb9cbc805448692e543b15853] ...
	I0520 11:27:05.660806 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e346471df6217fe773e2ef8f7b75f5b46cd71d6eb9cbc805448692e543b15853"
	I0520 11:27:05.702546 1657011 logs.go:123] Gathering logs for kubernetes-dashboard [478b985a70c80d44c1dc3a606124f6d8d1a1b16caa15815f4d319199182b0973] ...
	I0520 11:27:05.702574 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 478b985a70c80d44c1dc3a606124f6d8d1a1b16caa15815f4d319199182b0973"
	I0520 11:27:05.745356 1657011 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:27:05.745387 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:27:05.825181 1657011 logs.go:123] Gathering logs for kube-scheduler [200301c79d92cc1dca51aa9d3818041786dbc0fc25722073be3c056ab8ddc40d] ...
	I0520 11:27:05.825267 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 200301c79d92cc1dca51aa9d3818041786dbc0fc25722073be3c056ab8ddc40d"
	I0520 11:27:05.870836 1657011 logs.go:123] Gathering logs for kindnet [b92863fb0bcbb6bb4f4fb5c32ac048568d8bea72be8749218a1c81ceb8dcbf67] ...
	I0520 11:27:05.870867 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b92863fb0bcbb6bb4f4fb5c32ac048568d8bea72be8749218a1c81ceb8dcbf67"
	I0520 11:27:05.915034 1657011 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:27:05.915072 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:27:06.107107 1657011 logs.go:123] Gathering logs for kube-apiserver [0b14a0f156605ce747bbd3ec92b34eb3ecf92d995ef08cfe0d147e8a56eb0eb3] ...
	I0520 11:27:06.107145 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b14a0f156605ce747bbd3ec92b34eb3ecf92d995ef08cfe0d147e8a56eb0eb3"
	I0520 11:27:06.202380 1657011 logs.go:123] Gathering logs for kube-proxy [b026cb8ba53f0d1f48335254f05077b2a31ee7e5a35af6a7b530c5f18dccae5a] ...
	I0520 11:27:06.202416 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b026cb8ba53f0d1f48335254f05077b2a31ee7e5a35af6a7b530c5f18dccae5a"
	I0520 11:27:06.263898 1657011 logs.go:123] Gathering logs for kube-controller-manager [7f0c6c3acf70df9c58574fae04af747df228563cb3da7252a8324c5242528d4e] ...
	I0520 11:27:06.263929 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f0c6c3acf70df9c58574fae04af747df228563cb3da7252a8324c5242528d4e"
	I0520 11:27:06.385051 1657011 logs.go:123] Gathering logs for container status ...
	I0520 11:27:06.385089 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:27:06.468248 1657011 logs.go:123] Gathering logs for kubelet ...
	I0520 11:27:06.468281 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0520 11:27:06.537324 1657011 logs.go:138] Found kubelet problem: May 20 11:21:26 old-k8s-version-776336 kubelet[730]: E0520 11:21:26.809242     730 reflector.go:138] object-"kube-system"/"kube-proxy-token-dxbg4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-dxbg4" is forbidden: User "system:node:old-k8s-version-776336" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-776336' and this object
	W0520 11:27:06.537613 1657011 logs.go:138] Found kubelet problem: May 20 11:21:26 old-k8s-version-776336 kubelet[730]: E0520 11:21:26.815246     730 reflector.go:138] object-"kube-system"/"metrics-server-token-lstwk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-lstwk" is forbidden: User "system:node:old-k8s-version-776336" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-776336' and this object
	W0520 11:27:06.537878 1657011 logs.go:138] Found kubelet problem: May 20 11:21:26 old-k8s-version-776336 kubelet[730]: E0520 11:21:26.815362     730 reflector.go:138] object-"kube-system"/"kindnet-token-5d2mm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-5d2mm" is forbidden: User "system:node:old-k8s-version-776336" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-776336' and this object
	W0520 11:27:06.538107 1657011 logs.go:138] Found kubelet problem: May 20 11:21:26 old-k8s-version-776336 kubelet[730]: E0520 11:21:26.815419     730 reflector.go:138] object-"default"/"default-token-2c9fs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-2c9fs" is forbidden: User "system:node:old-k8s-version-776336" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-776336' and this object
	W0520 11:27:06.538331 1657011 logs.go:138] Found kubelet problem: May 20 11:21:26 old-k8s-version-776336 kubelet[730]: E0520 11:21:26.815480     730 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-776336" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-776336' and this object
	W0520 11:27:06.538553 1657011 logs.go:138] Found kubelet problem: May 20 11:21:26 old-k8s-version-776336 kubelet[730]: E0520 11:21:26.815526     730 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-776336" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-776336' and this object
	W0520 11:27:06.538781 1657011 logs.go:138] Found kubelet problem: May 20 11:21:26 old-k8s-version-776336 kubelet[730]: E0520 11:21:26.815596     730 reflector.go:138] object-"kube-system"/"coredns-token-8nh95": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-8nh95" is forbidden: User "system:node:old-k8s-version-776336" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-776336' and this object
	W0520 11:27:06.539091 1657011 logs.go:138] Found kubelet problem: May 20 11:21:26 old-k8s-version-776336 kubelet[730]: E0520 11:21:26.818042     730 reflector.go:138] object-"kube-system"/"storage-provisioner-token-jt6tq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-jt6tq" is forbidden: User "system:node:old-k8s-version-776336" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-776336' and this object
	W0520 11:27:06.551716 1657011 logs.go:138] Found kubelet problem: May 20 11:21:30 old-k8s-version-776336 kubelet[730]: E0520 11:21:30.748462     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0520 11:27:06.552134 1657011 logs.go:138] Found kubelet problem: May 20 11:21:31 old-k8s-version-776336 kubelet[730]: E0520 11:21:31.360081     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.554483 1657011 logs.go:138] Found kubelet problem: May 20 11:21:42 old-k8s-version-776336 kubelet[730]: E0520 11:21:42.354554     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0520 11:27:06.556998 1657011 logs.go:138] Found kubelet problem: May 20 11:21:55 old-k8s-version-776336 kubelet[730]: E0520 11:21:55.457689     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.557750 1657011 logs.go:138] Found kubelet problem: May 20 11:21:56 old-k8s-version-776336 kubelet[730]: E0520 11:21:56.460106     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.557970 1657011 logs.go:138] Found kubelet problem: May 20 11:21:57 old-k8s-version-776336 kubelet[730]: E0520 11:21:57.311677     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.558322 1657011 logs.go:138] Found kubelet problem: May 20 11:22:02 old-k8s-version-776336 kubelet[730]: E0520 11:22:02.482262     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.562435 1657011 logs.go:138] Found kubelet problem: May 20 11:22:08 old-k8s-version-776336 kubelet[730]: E0520 11:22:08.322783     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0520 11:27:06.563063 1657011 logs.go:138] Found kubelet problem: May 20 11:22:17 old-k8s-version-776336 kubelet[730]: E0520 11:22:17.507098     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.563422 1657011 logs.go:138] Found kubelet problem: May 20 11:22:22 old-k8s-version-776336 kubelet[730]: E0520 11:22:22.482798     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.563636 1657011 logs.go:138] Found kubelet problem: May 20 11:22:23 old-k8s-version-776336 kubelet[730]: E0520 11:22:23.311553     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.563979 1657011 logs.go:138] Found kubelet problem: May 20 11:22:36 old-k8s-version-776336 kubelet[730]: E0520 11:22:36.311591     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.564191 1657011 logs.go:138] Found kubelet problem: May 20 11:22:38 old-k8s-version-776336 kubelet[730]: E0520 11:22:38.312311     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.565819 1657011 logs.go:138] Found kubelet problem: May 20 11:22:47 old-k8s-version-776336 kubelet[730]: E0520 11:22:47.567324     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.566199 1657011 logs.go:138] Found kubelet problem: May 20 11:22:52 old-k8s-version-776336 kubelet[730]: E0520 11:22:52.482120     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.568339 1657011 logs.go:138] Found kubelet problem: May 20 11:22:53 old-k8s-version-776336 kubelet[730]: E0520 11:22:53.323112     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0520 11:27:06.568700 1657011 logs.go:138] Found kubelet problem: May 20 11:23:05 old-k8s-version-776336 kubelet[730]: E0520 11:23:05.311216     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.568889 1657011 logs.go:138] Found kubelet problem: May 20 11:23:05 old-k8s-version-776336 kubelet[730]: E0520 11:23:05.312235     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.569217 1657011 logs.go:138] Found kubelet problem: May 20 11:23:18 old-k8s-version-776336 kubelet[730]: E0520 11:23:18.311005     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.569422 1657011 logs.go:138] Found kubelet problem: May 20 11:23:19 old-k8s-version-776336 kubelet[730]: E0520 11:23:19.311931     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.569956 1657011 logs.go:138] Found kubelet problem: May 20 11:23:34 old-k8s-version-776336 kubelet[730]: E0520 11:23:34.312149     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.570314 1657011 logs.go:138] Found kubelet problem: May 20 11:23:34 old-k8s-version-776336 kubelet[730]: E0520 11:23:34.647790     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.570675 1657011 logs.go:138] Found kubelet problem: May 20 11:23:42 old-k8s-version-776336 kubelet[730]: E0520 11:23:42.482249     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.570884 1657011 logs.go:138] Found kubelet problem: May 20 11:23:48 old-k8s-version-776336 kubelet[730]: E0520 11:23:48.311989     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.571243 1657011 logs.go:138] Found kubelet problem: May 20 11:23:54 old-k8s-version-776336 kubelet[730]: E0520 11:23:54.311066     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.571465 1657011 logs.go:138] Found kubelet problem: May 20 11:24:02 old-k8s-version-776336 kubelet[730]: E0520 11:24:02.312243     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.571815 1657011 logs.go:138] Found kubelet problem: May 20 11:24:06 old-k8s-version-776336 kubelet[730]: E0520 11:24:06.313386     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.574161 1657011 logs.go:138] Found kubelet problem: May 20 11:24:17 old-k8s-version-776336 kubelet[730]: E0520 11:24:17.319931     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0520 11:27:06.574523 1657011 logs.go:138] Found kubelet problem: May 20 11:24:19 old-k8s-version-776336 kubelet[730]: E0520 11:24:19.310996     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.574733 1657011 logs.go:138] Found kubelet problem: May 20 11:24:28 old-k8s-version-776336 kubelet[730]: E0520 11:24:28.311548     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.575111 1657011 logs.go:138] Found kubelet problem: May 20 11:24:32 old-k8s-version-776336 kubelet[730]: E0520 11:24:32.310984     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.575465 1657011 logs.go:138] Found kubelet problem: May 20 11:24:43 old-k8s-version-776336 kubelet[730]: E0520 11:24:43.311302     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.575675 1657011 logs.go:138] Found kubelet problem: May 20 11:24:43 old-k8s-version-776336 kubelet[730]: E0520 11:24:43.312650     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.576077 1657011 logs.go:138] Found kubelet problem: May 20 11:24:58 old-k8s-version-776336 kubelet[730]: E0520 11:24:58.313801     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.576602 1657011 logs.go:138] Found kubelet problem: May 20 11:24:59 old-k8s-version-776336 kubelet[730]: E0520 11:24:59.771509     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.577160 1657011 logs.go:138] Found kubelet problem: May 20 11:25:02 old-k8s-version-776336 kubelet[730]: E0520 11:25:02.482351     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.577361 1657011 logs.go:138] Found kubelet problem: May 20 11:25:13 old-k8s-version-776336 kubelet[730]: E0520 11:25:13.311740     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.577700 1657011 logs.go:138] Found kubelet problem: May 20 11:25:14 old-k8s-version-776336 kubelet[730]: E0520 11:25:14.311041     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.578552 1657011 logs.go:138] Found kubelet problem: May 20 11:25:26 old-k8s-version-776336 kubelet[730]: E0520 11:25:26.311269     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.578740 1657011 logs.go:138] Found kubelet problem: May 20 11:25:28 old-k8s-version-776336 kubelet[730]: E0520 11:25:28.311449     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.579075 1657011 logs.go:138] Found kubelet problem: May 20 11:25:40 old-k8s-version-776336 kubelet[730]: E0520 11:25:40.311409     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.579263 1657011 logs.go:138] Found kubelet problem: May 20 11:25:40 old-k8s-version-776336 kubelet[730]: E0520 11:25:40.312759     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.579647 1657011 logs.go:138] Found kubelet problem: May 20 11:25:52 old-k8s-version-776336 kubelet[730]: E0520 11:25:52.311113     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.579896 1657011 logs.go:138] Found kubelet problem: May 20 11:25:53 old-k8s-version-776336 kubelet[730]: E0520 11:25:53.311487     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.580290 1657011 logs.go:138] Found kubelet problem: May 20 11:26:05 old-k8s-version-776336 kubelet[730]: E0520 11:26:05.311044     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.580502 1657011 logs.go:138] Found kubelet problem: May 20 11:26:07 old-k8s-version-776336 kubelet[730]: E0520 11:26:07.311454     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.581101 1657011 logs.go:138] Found kubelet problem: May 20 11:26:17 old-k8s-version-776336 kubelet[730]: E0520 11:26:17.312403     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.581310 1657011 logs.go:138] Found kubelet problem: May 20 11:26:22 old-k8s-version-776336 kubelet[730]: E0520 11:26:22.311955     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.581693 1657011 logs.go:138] Found kubelet problem: May 20 11:26:29 old-k8s-version-776336 kubelet[730]: E0520 11:26:29.310991     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.581904 1657011 logs.go:138] Found kubelet problem: May 20 11:26:35 old-k8s-version-776336 kubelet[730]: E0520 11:26:35.311614     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.582279 1657011 logs.go:138] Found kubelet problem: May 20 11:26:40 old-k8s-version-776336 kubelet[730]: E0520 11:26:40.310989     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.582688 1657011 logs.go:138] Found kubelet problem: May 20 11:26:46 old-k8s-version-776336 kubelet[730]: E0520 11:26:46.315105     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.583045 1657011 logs.go:138] Found kubelet problem: May 20 11:26:53 old-k8s-version-776336 kubelet[730]: E0520 11:26:53.310994     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.583283 1657011 logs.go:138] Found kubelet problem: May 20 11:26:57 old-k8s-version-776336 kubelet[730]: E0520 11:26:57.312314     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.583664 1657011 logs.go:138] Found kubelet problem: May 20 11:27:06 old-k8s-version-776336 kubelet[730]: E0520 11:27:06.311519     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	I0520 11:27:06.583679 1657011 logs.go:123] Gathering logs for dmesg ...
	I0520 11:27:06.583709 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:27:06.607566 1657011 logs.go:123] Gathering logs for etcd [a0c29f0d3e719d5fdc0931e9109cfae19b3167c2b4e839f25c7b1b63784a02a4] ...
	I0520 11:27:06.607599 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0c29f0d3e719d5fdc0931e9109cfae19b3167c2b4e839f25c7b1b63784a02a4"
	I0520 11:27:06.704593 1657011 logs.go:123] Gathering logs for storage-provisioner [80aac8daed1fe77959700c7a1f1a34853c09f3dbeb46648d0ab015345f91e607] ...
	I0520 11:27:06.704637 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80aac8daed1fe77959700c7a1f1a34853c09f3dbeb46648d0ab015345f91e607"
	I0520 11:27:06.755173 1657011 out.go:304] Setting ErrFile to fd 2...
	I0520 11:27:06.755199 1657011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0520 11:27:06.755246 1657011 out.go:239] X Problems detected in kubelet:
	W0520 11:27:06.755260 1657011 out.go:239]   May 20 11:26:40 old-k8s-version-776336 kubelet[730]: E0520 11:26:40.310989     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.755267 1657011 out.go:239]   May 20 11:26:46 old-k8s-version-776336 kubelet[730]: E0520 11:26:46.315105     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.755278 1657011 out.go:239]   May 20 11:26:53 old-k8s-version-776336 kubelet[730]: E0520 11:26:53.310994     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.755303 1657011 out.go:239]   May 20 11:26:57 old-k8s-version-776336 kubelet[730]: E0520 11:26:57.312314     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.755309 1657011 out.go:239]   May 20 11:27:06 old-k8s-version-776336 kubelet[730]: E0520 11:27:06.311519     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	I0520 11:27:06.755316 1657011 out.go:304] Setting ErrFile to fd 2...
	I0520 11:27:06.755327 1657011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:27:16.756505 1657011 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0520 11:27:16.769614 1657011 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0520 11:27:16.771733 1657011 out.go:177] 
	W0520 11:27:16.773260 1657011 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0520 11:27:16.773298 1657011 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0520 11:27:16.773326 1657011 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0520 11:27:16.773332 1657011 out.go:239] * 
	W0520 11:27:16.775071 1657011 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 11:27:16.777544 1657011 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-776336 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 102
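The captured stderr above already carries the suggested remediation for K8S_UNHEALTHY_CONTROL_PLANE. A minimal recovery sketch, assuming the same workspace and binary as this run (the profile name and flags are copied verbatim from the failed command above; this is illustrative, not part of the test):

	# purge every minikube profile, network and cached state, then retry the failed second start
	out/minikube-linux-arm64 delete --all --purge
	out/minikube-linux-arm64 start -p old-k8s-version-776336 --memory=2200 --alsologtostderr --wait=true --driver=docker --container-runtime=crio --kubernetes-version=v1.20.0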
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-776336
helpers_test.go:235: (dbg) docker inspect old-k8s-version-776336:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "79b3e893485d03f6c7fa54c0f1076a4e9f6fac7238dffad483d3ab0503f5a33a",
	        "Created": "2024-05-20T11:17:38.406601646Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1657196,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-20T11:21:00.391994236Z",
	            "FinishedAt": "2024-05-20T11:20:58.801235956Z"
	        },
	        "Image": "sha256:56620e18f2c2c9a0448fc43c42f840334bd2baea497ff8deae66477dd0dbfecf",
	        "ResolvConfPath": "/var/lib/docker/containers/79b3e893485d03f6c7fa54c0f1076a4e9f6fac7238dffad483d3ab0503f5a33a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/79b3e893485d03f6c7fa54c0f1076a4e9f6fac7238dffad483d3ab0503f5a33a/hostname",
	        "HostsPath": "/var/lib/docker/containers/79b3e893485d03f6c7fa54c0f1076a4e9f6fac7238dffad483d3ab0503f5a33a/hosts",
	        "LogPath": "/var/lib/docker/containers/79b3e893485d03f6c7fa54c0f1076a4e9f6fac7238dffad483d3ab0503f5a33a/79b3e893485d03f6c7fa54c0f1076a4e9f6fac7238dffad483d3ab0503f5a33a-json.log",
	        "Name": "/old-k8s-version-776336",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-776336:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-776336",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/45b5a8ecb8f2c11d5a9b10ddffcf25f035d195b50dc82be254c42746a2acc0c0-init/diff:/var/lib/docker/overlay2/85c5c7809a5d893ae54ed3fa4fb6194b99d9d246c69ccb3f2daa2ee41dec0e23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/45b5a8ecb8f2c11d5a9b10ddffcf25f035d195b50dc82be254c42746a2acc0c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/45b5a8ecb8f2c11d5a9b10ddffcf25f035d195b50dc82be254c42746a2acc0c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/45b5a8ecb8f2c11d5a9b10ddffcf25f035d195b50dc82be254c42746a2acc0c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-776336",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-776336/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-776336",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-776336",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-776336",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7ffeb38168be77ad7b17ef0de2f5a15b2d751ea5eb99803db3a753620ce84381",
	            "SandboxKey": "/var/run/docker/netns/7ffeb38168be",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40784"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-776336": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "90b2363a1f83bdfada5377ba313f690b65fa721d788656514bda9a988666874a",
	                    "EndpointID": "7c177a710029eb56c8b20fcd7d3363f28570dda3ae6afa2172f756ce21c8cf7a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-776336",
	                        "79b3e893485d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
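The dump above is the full container document; when only a few fields matter, docker inspect also accepts a Go template via -f. A small sketch against the same container, using field paths visible in the JSON above:

	# just the restart-relevant state fields for the profile container
	docker inspect -f 'status={{.State.Status}} started={{.State.StartedAt}} finished={{.State.FinishedAt}}' old-k8s-version-776336
	# the static IP assigned on the profile network
	docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-776336").IPAddress}}' old-k8s-version-776336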
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-776336 -n old-k8s-version-776336
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-776336 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-776336 logs -n 25: (1.840953073s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cilium-214342                                       | cilium-214342            | jenkins | v1.33.1 | 20 May 24 11:16 UTC | 20 May 24 11:16 UTC |
	| start   | -p cert-expiration-052084                              | cert-expiration-052084   | jenkins | v1.33.1 | 20 May 24 11:16 UTC | 20 May 24 11:16 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                               |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-085097                            | force-systemd-env-085097 | jenkins | v1.33.1 | 20 May 24 11:16 UTC | 20 May 24 11:16 UTC |
	| start   | -p cert-options-069594                                 | cert-options-069594      | jenkins | v1.33.1 | 20 May 24 11:16 UTC | 20 May 24 11:17 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                               |                          |         |         |                     |                     |
	| ssh     | cert-options-069594 ssh                                | cert-options-069594      | jenkins | v1.33.1 | 20 May 24 11:17 UTC | 20 May 24 11:17 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-069594 -- sudo                         | cert-options-069594      | jenkins | v1.33.1 | 20 May 24 11:17 UTC | 20 May 24 11:17 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-069594                                 | cert-options-069594      | jenkins | v1.33.1 | 20 May 24 11:17 UTC | 20 May 24 11:17 UTC |
	| start   | -p old-k8s-version-776336                              | old-k8s-version-776336   | jenkins | v1.33.1 | 20 May 24 11:17 UTC | 20 May 24 11:20 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                               |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-052084                              | cert-expiration-052084   | jenkins | v1.33.1 | 20 May 24 11:19 UTC | 20 May 24 11:20 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                               |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-052084                              | cert-expiration-052084   | jenkins | v1.33.1 | 20 May 24 11:20 UTC | 20 May 24 11:20 UTC |
	| start   | -p no-preload-027096                                   | no-preload-027096        | jenkins | v1.33.1 | 20 May 24 11:20 UTC | 20 May 24 11:21 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                               |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-776336        | old-k8s-version-776336   | jenkins | v1.33.1 | 20 May 24 11:20 UTC | 20 May 24 11:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-776336                              | old-k8s-version-776336   | jenkins | v1.33.1 | 20 May 24 11:20 UTC | 20 May 24 11:20 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-776336             | old-k8s-version-776336   | jenkins | v1.33.1 | 20 May 24 11:20 UTC | 20 May 24 11:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-776336                              | old-k8s-version-776336   | jenkins | v1.33.1 | 20 May 24 11:20 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                               |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-027096             | no-preload-027096        | jenkins | v1.33.1 | 20 May 24 11:21 UTC | 20 May 24 11:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-027096                                   | no-preload-027096        | jenkins | v1.33.1 | 20 May 24 11:21 UTC | 20 May 24 11:21 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-027096                  | no-preload-027096        | jenkins | v1.33.1 | 20 May 24 11:21 UTC | 20 May 24 11:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-027096                                   | no-preload-027096        | jenkins | v1.33.1 | 20 May 24 11:21 UTC | 20 May 24 11:26 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                               |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                          |         |         |                     |                     |
	| image   | no-preload-027096 image list                           | no-preload-027096        | jenkins | v1.33.1 | 20 May 24 11:26 UTC | 20 May 24 11:26 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-027096                                   | no-preload-027096        | jenkins | v1.33.1 | 20 May 24 11:26 UTC | 20 May 24 11:26 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-027096                                   | no-preload-027096        | jenkins | v1.33.1 | 20 May 24 11:26 UTC | 20 May 24 11:26 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-027096                                   | no-preload-027096        | jenkins | v1.33.1 | 20 May 24 11:26 UTC | 20 May 24 11:26 UTC |
	| delete  | -p no-preload-027096                                   | no-preload-027096        | jenkins | v1.33.1 | 20 May 24 11:26 UTC | 20 May 24 11:27 UTC |
	| start   | -p embed-certs-746238                                  | embed-certs-746238       | jenkins | v1.33.1 | 20 May 24 11:27 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 11:27:00
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 11:27:00.333763 1665508 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:27:00.334142 1665508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:27:00.334175 1665508 out.go:304] Setting ErrFile to fd 2...
	I0520 11:27:00.334182 1665508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:27:00.334872 1665508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-1463640/.minikube/bin
	I0520 11:27:00.335537 1665508 out.go:298] Setting JSON to false
	I0520 11:27:00.336817 1665508 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":155368,"bootTime":1716049053,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0520 11:27:00.336936 1665508 start.go:139] virtualization:  
	I0520 11:27:00.340285 1665508 out.go:177] * [embed-certs-746238] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0520 11:27:00.342920 1665508 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 11:27:00.345583 1665508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 11:27:00.343103 1665508 notify.go:220] Checking for updates...
	I0520 11:27:00.351132 1665508 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-1463640/kubeconfig
	I0520 11:27:00.353823 1665508 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-1463640/.minikube
	I0520 11:27:00.362510 1665508 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0520 11:27:00.366050 1665508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 11:27:00.370083 1665508 config.go:182] Loaded profile config "old-k8s-version-776336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0520 11:27:00.370236 1665508 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 11:27:00.395139 1665508 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0520 11:27:00.395298 1665508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 11:27:00.470193 1665508 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-05-20 11:27:00.459110555 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 11:27:00.470314 1665508 docker.go:295] overlay module found
	I0520 11:27:00.473002 1665508 out.go:177] * Using the docker driver based on user configuration
	I0520 11:27:00.475402 1665508 start.go:297] selected driver: docker
	I0520 11:27:00.475468 1665508 start.go:901] validating driver "docker" against <nil>
	I0520 11:27:00.475485 1665508 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 11:27:00.476185 1665508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 11:27:00.530380 1665508 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-05-20 11:27:00.520975817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 11:27:00.530551 1665508 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 11:27:00.530802 1665508 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:27:00.533425 1665508 out.go:177] * Using Docker driver with root privileges
	I0520 11:27:00.535901 1665508 cni.go:84] Creating CNI manager for ""
	I0520 11:27:00.535931 1665508 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0520 11:27:00.535943 1665508 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 11:27:00.536035 1665508 start.go:340] cluster config:
	{Name:embed-certs-746238 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-746238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:27:00.539131 1665508 out.go:177] * Starting "embed-certs-746238" primary control-plane node in "embed-certs-746238" cluster
	I0520 11:27:00.541819 1665508 cache.go:121] Beginning downloading kic base image for docker with crio
	I0520 11:27:00.544666 1665508 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0520 11:27:00.547277 1665508 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:27:00.547343 1665508 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-1463640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4
	I0520 11:27:00.547355 1665508 cache.go:56] Caching tarball of preloaded images
	I0520 11:27:00.547366 1665508 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0520 11:27:00.547443 1665508 preload.go:173] Found /home/jenkins/minikube-integration/18925-1463640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0520 11:27:00.547462 1665508 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 11:27:00.547572 1665508 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/config.json ...
	I0520 11:27:00.547596 1665508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/config.json: {Name:mkfbada764bf631f21a9b25221ca0dff2ace95c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:27:00.564081 1665508 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0520 11:27:00.564108 1665508 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0520 11:27:00.564133 1665508 cache.go:194] Successfully downloaded all kic artifacts
	I0520 11:27:00.564177 1665508 start.go:360] acquireMachinesLock for embed-certs-746238: {Name:mk07f80be94c2fb7194b76edb6be0c623fcf20e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:27:00.564719 1665508 start.go:364] duration metric: took 515.164µs to acquireMachinesLock for "embed-certs-746238"
	I0520 11:27:00.564757 1665508 start.go:93] Provisioning new machine with config: &{Name:embed-certs-746238 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-746238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:27:00.564849 1665508 start.go:125] createHost starting for "" (driver="docker")
	I0520 11:27:00.568028 1665508 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0520 11:27:00.568298 1665508 start.go:159] libmachine.API.Create for "embed-certs-746238" (driver="docker")
	I0520 11:27:00.568336 1665508 client.go:168] LocalClient.Create starting
	I0520 11:27:00.568432 1665508 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca.pem
	I0520 11:27:00.568476 1665508 main.go:141] libmachine: Decoding PEM data...
	I0520 11:27:00.568496 1665508 main.go:141] libmachine: Parsing certificate...
	I0520 11:27:00.568554 1665508 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/cert.pem
	I0520 11:27:00.568579 1665508 main.go:141] libmachine: Decoding PEM data...
	I0520 11:27:00.568589 1665508 main.go:141] libmachine: Parsing certificate...
	I0520 11:27:00.568965 1665508 cli_runner.go:164] Run: docker network inspect embed-certs-746238 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0520 11:27:00.583701 1665508 cli_runner.go:211] docker network inspect embed-certs-746238 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0520 11:27:00.583794 1665508 network_create.go:281] running [docker network inspect embed-certs-746238] to gather additional debugging logs...
	I0520 11:27:00.583823 1665508 cli_runner.go:164] Run: docker network inspect embed-certs-746238
	W0520 11:27:00.598461 1665508 cli_runner.go:211] docker network inspect embed-certs-746238 returned with exit code 1
	I0520 11:27:00.598508 1665508 network_create.go:284] error running [docker network inspect embed-certs-746238]: docker network inspect embed-certs-746238: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-746238 not found
	I0520 11:27:00.598522 1665508 network_create.go:286] output of [docker network inspect embed-certs-746238]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-746238 not found
	
	** /stderr **
	I0520 11:27:00.598637 1665508 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0520 11:27:00.613401 1665508 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8c4bd3e0faae IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ec:12:dd:0e} reservation:<nil>}
	I0520 11:27:00.613939 1665508 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-826182de0a36 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:81:f4:78:27} reservation:<nil>}
	I0520 11:27:00.614385 1665508 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-79381957ebaa IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:bf:b6:44:9d} reservation:<nil>}
	I0520 11:27:00.614912 1665508 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-90b2363a1f83 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:6c:6f:9f:af} reservation:<nil>}
	I0520 11:27:00.615476 1665508 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017c2190}
	I0520 11:27:00.615505 1665508 network_create.go:124] attempt to create docker network embed-certs-746238 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0520 11:27:00.615572 1665508 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-746238 embed-certs-746238
	I0520 11:27:00.678470 1665508 network_create.go:108] docker network embed-certs-746238 192.168.85.0/24 created
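	
	The subnet scan above walks 192.168.49.0/24, .58, .67, .76 (all taken by existing `br-*` bridges) and settles on 192.168.85.0/24, i.e. the third octet steps by 9. A sketch of that selection under the simplifying assumption that a subnet is "taken" exactly when its gateway is bound to a local interface; minikube's real check also consults docker's own network list:
	
	```go
	package main
	
	import (
		"fmt"
		"net"
	)
	
	// freePrivateSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... (third
	// octet stepping by 9, as in the log) and returns the first candidate
	// whose gateway is not already bound to a host interface.
	func freePrivateSubnet() (*net.IPNet, error) {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return nil, err
		}
		taken := func(gw net.IP) bool {
			for _, a := range addrs {
				if ipn, ok := a.(*net.IPNet); ok && ipn.Contains(gw) {
					return true // e.g. br-8c4bd3e0faae already owns 192.168.49.1
				}
			}
			return false
		}
		for octet := 49; octet <= 254; octet += 9 {
			if gw := net.IPv4(192, 168, byte(octet), 1); !taken(gw) {
				_, subnet, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", octet))
				return subnet, nil
			}
		}
		return nil, fmt.Errorf("no free 192.168.x.0/24 subnet found")
	}
	
	func main() {
		s, err := freePrivateSubnet()
		fmt.Println(s, err)
	}
	```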
	I0520 11:27:00.678504 1665508 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-746238" container
	I0520 11:27:00.678582 1665508 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0520 11:27:00.693608 1665508 cli_runner.go:164] Run: docker volume create embed-certs-746238 --label name.minikube.sigs.k8s.io=embed-certs-746238 --label created_by.minikube.sigs.k8s.io=true
	I0520 11:27:00.712179 1665508 oci.go:103] Successfully created a docker volume embed-certs-746238
	I0520 11:27:00.712273 1665508 cli_runner.go:164] Run: docker run --rm --name embed-certs-746238-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-746238 --entrypoint /usr/bin/test -v embed-certs-746238:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0520 11:27:01.405030 1665508 oci.go:107] Successfully prepared a docker volume embed-certs-746238
	I0520 11:27:01.405093 1665508 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:27:01.405114 1665508 kic.go:194] Starting extracting preloaded images to volume ...
	I0520 11:27:01.405196 1665508 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18925-1463640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-746238:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
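	
	The extraction step above is a neat trick: instead of unpacking on the host, a throwaway container is started with `/usr/bin/tar` as its entrypoint, the preload tarball mounted read-only, and the named volume mounted as the destination, so the images land directly in the volume the node container will later use. A hedged Go sketch of the same invocation (the local tarball path in `main` is a hypothetical placeholder):
	
	```go
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// extractPreload unpacks an lz4-compressed preload tarball into a named
	// docker volume via a throwaway tar container, mirroring the docker run
	// command in the log above.
	func extractPreload(tarball, volume, image string) error {
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("extract failed: %v: %s", err, out)
		}
		return nil
	}
	
	func main() {
		err := extractPreload(
			"/path/to/preloaded-images.tar.lz4", // hypothetical local path
			"embed-certs-746238",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887")
		fmt.Println(err)
	}
	```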
	I0520 11:27:05.196731 1657011 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:27:05.209371 1657011 api_server.go:72] duration metric: took 5m56.762872229s to wait for apiserver process to appear ...
	I0520 11:27:05.209406 1657011 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:27:05.209441 1657011 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:27:05.209503 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:27:05.254986 1657011 cri.go:89] found id: "0b14a0f156605ce747bbd3ec92b34eb3ecf92d995ef08cfe0d147e8a56eb0eb3"
	I0520 11:27:05.255010 1657011 cri.go:89] found id: ""
	I0520 11:27:05.255018 1657011 logs.go:276] 1 containers: [0b14a0f156605ce747bbd3ec92b34eb3ecf92d995ef08cfe0d147e8a56eb0eb3]
	I0520 11:27:05.255097 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:27:05.258937 1657011 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:27:05.259014 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:27:05.304683 1657011 cri.go:89] found id: "a0c29f0d3e719d5fdc0931e9109cfae19b3167c2b4e839f25c7b1b63784a02a4"
	I0520 11:27:05.304705 1657011 cri.go:89] found id: ""
	I0520 11:27:05.304712 1657011 logs.go:276] 1 containers: [a0c29f0d3e719d5fdc0931e9109cfae19b3167c2b4e839f25c7b1b63784a02a4]
	I0520 11:27:05.304776 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:27:05.308619 1657011 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:27:05.308696 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:27:05.363412 1657011 cri.go:89] found id: "e346471df6217fe773e2ef8f7b75f5b46cd71d6eb9cbc805448692e543b15853"
	I0520 11:27:05.363436 1657011 cri.go:89] found id: ""
	I0520 11:27:05.363445 1657011 logs.go:276] 1 containers: [e346471df6217fe773e2ef8f7b75f5b46cd71d6eb9cbc805448692e543b15853]
	I0520 11:27:05.363501 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:27:05.368214 1657011 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:27:05.368293 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:27:05.406961 1657011 cri.go:89] found id: "200301c79d92cc1dca51aa9d3818041786dbc0fc25722073be3c056ab8ddc40d"
	I0520 11:27:05.406985 1657011 cri.go:89] found id: ""
	I0520 11:27:05.407004 1657011 logs.go:276] 1 containers: [200301c79d92cc1dca51aa9d3818041786dbc0fc25722073be3c056ab8ddc40d]
	I0520 11:27:05.407109 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:27:05.410985 1657011 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:27:05.411075 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:27:05.452948 1657011 cri.go:89] found id: "b026cb8ba53f0d1f48335254f05077b2a31ee7e5a35af6a7b530c5f18dccae5a"
	I0520 11:27:05.452972 1657011 cri.go:89] found id: ""
	I0520 11:27:05.452981 1657011 logs.go:276] 1 containers: [b026cb8ba53f0d1f48335254f05077b2a31ee7e5a35af6a7b530c5f18dccae5a]
	I0520 11:27:05.453039 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:27:05.456828 1657011 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:27:05.456935 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:27:05.496976 1657011 cri.go:89] found id: "7f0c6c3acf70df9c58574fae04af747df228563cb3da7252a8324c5242528d4e"
	I0520 11:27:05.496998 1657011 cri.go:89] found id: ""
	I0520 11:27:05.497006 1657011 logs.go:276] 1 containers: [7f0c6c3acf70df9c58574fae04af747df228563cb3da7252a8324c5242528d4e]
	I0520 11:27:05.497084 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:27:05.500811 1657011 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:27:05.500907 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:27:05.543135 1657011 cri.go:89] found id: "b92863fb0bcbb6bb4f4fb5c32ac048568d8bea72be8749218a1c81ceb8dcbf67"
	I0520 11:27:05.543154 1657011 cri.go:89] found id: ""
	I0520 11:27:05.543162 1657011 logs.go:276] 1 containers: [b92863fb0bcbb6bb4f4fb5c32ac048568d8bea72be8749218a1c81ceb8dcbf67]
	I0520 11:27:05.543221 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:27:05.547142 1657011 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:27:05.547234 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:27:05.606997 1657011 cri.go:89] found id: "80aac8daed1fe77959700c7a1f1a34853c09f3dbeb46648d0ab015345f91e607"
	I0520 11:27:05.607065 1657011 cri.go:89] found id: ""
	I0520 11:27:05.607080 1657011 logs.go:276] 1 containers: [80aac8daed1fe77959700c7a1f1a34853c09f3dbeb46648d0ab015345f91e607]
	I0520 11:27:05.607144 1657011 ssh_runner.go:195] Run: which crictl
	I0520 11:27:05.610756 1657011 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:27:05.610852 1657011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:27:05.656696 1657011 cri.go:89] found id: "478b985a70c80d44c1dc3a606124f6d8d1a1b16caa15815f4d319199182b0973"
	I0520 11:27:05.656718 1657011 cri.go:89] found id: ""
	I0520 11:27:05.656726 1657011 logs.go:276] 1 containers: [478b985a70c80d44c1dc3a606124f6d8d1a1b16caa15815f4d319199182b0973]
	I0520 11:27:05.656785 1657011 ssh_runner.go:195] Run: which crictl
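	
	The loop above repeats the same two-step probe per control-plane component: list matching container IDs with `crictl ps -a --quiet --name=<component>`, then resolve the crictl binary with `which crictl` before tailing its logs. A compact sketch of that discovery loop, assuming crictl is on PATH and reachable via sudo:
	
	```go
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// containerIDs lists all CRI containers (running or exited) whose name
	// matches the component, one ID per line, exactly as
	// `crictl ps -a --quiet --name=X` prints them.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}
	
	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := containerIDs(c)
			fmt.Println(c, ids, err)
		}
	}
	```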
	I0520 11:27:05.660781 1657011 logs.go:123] Gathering logs for coredns [e346471df6217fe773e2ef8f7b75f5b46cd71d6eb9cbc805448692e543b15853] ...
	I0520 11:27:05.660806 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e346471df6217fe773e2ef8f7b75f5b46cd71d6eb9cbc805448692e543b15853"
	I0520 11:27:05.702546 1657011 logs.go:123] Gathering logs for kubernetes-dashboard [478b985a70c80d44c1dc3a606124f6d8d1a1b16caa15815f4d319199182b0973] ...
	I0520 11:27:05.702574 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 478b985a70c80d44c1dc3a606124f6d8d1a1b16caa15815f4d319199182b0973"
	I0520 11:27:05.745356 1657011 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:27:05.745387 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:27:05.825181 1657011 logs.go:123] Gathering logs for kube-scheduler [200301c79d92cc1dca51aa9d3818041786dbc0fc25722073be3c056ab8ddc40d] ...
	I0520 11:27:05.825267 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 200301c79d92cc1dca51aa9d3818041786dbc0fc25722073be3c056ab8ddc40d"
	I0520 11:27:05.870836 1657011 logs.go:123] Gathering logs for kindnet [b92863fb0bcbb6bb4f4fb5c32ac048568d8bea72be8749218a1c81ceb8dcbf67] ...
	I0520 11:27:05.870867 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b92863fb0bcbb6bb4f4fb5c32ac048568d8bea72be8749218a1c81ceb8dcbf67"
	I0520 11:27:05.915034 1657011 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:27:05.915072 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:27:06.107107 1657011 logs.go:123] Gathering logs for kube-apiserver [0b14a0f156605ce747bbd3ec92b34eb3ecf92d995ef08cfe0d147e8a56eb0eb3] ...
	I0520 11:27:06.107145 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b14a0f156605ce747bbd3ec92b34eb3ecf92d995ef08cfe0d147e8a56eb0eb3"
	I0520 11:27:06.202380 1657011 logs.go:123] Gathering logs for kube-proxy [b026cb8ba53f0d1f48335254f05077b2a31ee7e5a35af6a7b530c5f18dccae5a] ...
	I0520 11:27:06.202416 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b026cb8ba53f0d1f48335254f05077b2a31ee7e5a35af6a7b530c5f18dccae5a"
	I0520 11:27:06.263898 1657011 logs.go:123] Gathering logs for kube-controller-manager [7f0c6c3acf70df9c58574fae04af747df228563cb3da7252a8324c5242528d4e] ...
	I0520 11:27:06.263929 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f0c6c3acf70df9c58574fae04af747df228563cb3da7252a8324c5242528d4e"
	I0520 11:27:06.385051 1657011 logs.go:123] Gathering logs for container status ...
	I0520 11:27:06.385089 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:27:06.468248 1657011 logs.go:123] Gathering logs for kubelet ...
	I0520 11:27:06.468281 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0520 11:27:06.537324 1657011 logs.go:138] Found kubelet problem: May 20 11:21:26 old-k8s-version-776336 kubelet[730]: E0520 11:21:26.809242     730 reflector.go:138] object-"kube-system"/"kube-proxy-token-dxbg4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-dxbg4" is forbidden: User "system:node:old-k8s-version-776336" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-776336' and this object
	W0520 11:27:06.537613 1657011 logs.go:138] Found kubelet problem: May 20 11:21:26 old-k8s-version-776336 kubelet[730]: E0520 11:21:26.815246     730 reflector.go:138] object-"kube-system"/"metrics-server-token-lstwk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-lstwk" is forbidden: User "system:node:old-k8s-version-776336" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-776336' and this object
	W0520 11:27:06.537878 1657011 logs.go:138] Found kubelet problem: May 20 11:21:26 old-k8s-version-776336 kubelet[730]: E0520 11:21:26.815362     730 reflector.go:138] object-"kube-system"/"kindnet-token-5d2mm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-5d2mm" is forbidden: User "system:node:old-k8s-version-776336" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-776336' and this object
	W0520 11:27:06.538107 1657011 logs.go:138] Found kubelet problem: May 20 11:21:26 old-k8s-version-776336 kubelet[730]: E0520 11:21:26.815419     730 reflector.go:138] object-"default"/"default-token-2c9fs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-2c9fs" is forbidden: User "system:node:old-k8s-version-776336" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-776336' and this object
	W0520 11:27:06.538331 1657011 logs.go:138] Found kubelet problem: May 20 11:21:26 old-k8s-version-776336 kubelet[730]: E0520 11:21:26.815480     730 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-776336" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-776336' and this object
	W0520 11:27:06.538553 1657011 logs.go:138] Found kubelet problem: May 20 11:21:26 old-k8s-version-776336 kubelet[730]: E0520 11:21:26.815526     730 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-776336" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-776336' and this object
	W0520 11:27:06.538781 1657011 logs.go:138] Found kubelet problem: May 20 11:21:26 old-k8s-version-776336 kubelet[730]: E0520 11:21:26.815596     730 reflector.go:138] object-"kube-system"/"coredns-token-8nh95": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-8nh95" is forbidden: User "system:node:old-k8s-version-776336" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-776336' and this object
	W0520 11:27:06.539091 1657011 logs.go:138] Found kubelet problem: May 20 11:21:26 old-k8s-version-776336 kubelet[730]: E0520 11:21:26.818042     730 reflector.go:138] object-"kube-system"/"storage-provisioner-token-jt6tq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-jt6tq" is forbidden: User "system:node:old-k8s-version-776336" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-776336' and this object
	W0520 11:27:06.551716 1657011 logs.go:138] Found kubelet problem: May 20 11:21:30 old-k8s-version-776336 kubelet[730]: E0520 11:21:30.748462     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0520 11:27:06.552134 1657011 logs.go:138] Found kubelet problem: May 20 11:21:31 old-k8s-version-776336 kubelet[730]: E0520 11:21:31.360081     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.554483 1657011 logs.go:138] Found kubelet problem: May 20 11:21:42 old-k8s-version-776336 kubelet[730]: E0520 11:21:42.354554     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0520 11:27:06.556998 1657011 logs.go:138] Found kubelet problem: May 20 11:21:55 old-k8s-version-776336 kubelet[730]: E0520 11:21:55.457689     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.557750 1657011 logs.go:138] Found kubelet problem: May 20 11:21:56 old-k8s-version-776336 kubelet[730]: E0520 11:21:56.460106     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.557970 1657011 logs.go:138] Found kubelet problem: May 20 11:21:57 old-k8s-version-776336 kubelet[730]: E0520 11:21:57.311677     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.558322 1657011 logs.go:138] Found kubelet problem: May 20 11:22:02 old-k8s-version-776336 kubelet[730]: E0520 11:22:02.482262     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.562435 1657011 logs.go:138] Found kubelet problem: May 20 11:22:08 old-k8s-version-776336 kubelet[730]: E0520 11:22:08.322783     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0520 11:27:06.563063 1657011 logs.go:138] Found kubelet problem: May 20 11:22:17 old-k8s-version-776336 kubelet[730]: E0520 11:22:17.507098     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.563422 1657011 logs.go:138] Found kubelet problem: May 20 11:22:22 old-k8s-version-776336 kubelet[730]: E0520 11:22:22.482798     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.563636 1657011 logs.go:138] Found kubelet problem: May 20 11:22:23 old-k8s-version-776336 kubelet[730]: E0520 11:22:23.311553     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.563979 1657011 logs.go:138] Found kubelet problem: May 20 11:22:36 old-k8s-version-776336 kubelet[730]: E0520 11:22:36.311591     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.564191 1657011 logs.go:138] Found kubelet problem: May 20 11:22:38 old-k8s-version-776336 kubelet[730]: E0520 11:22:38.312311     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.565819 1657011 logs.go:138] Found kubelet problem: May 20 11:22:47 old-k8s-version-776336 kubelet[730]: E0520 11:22:47.567324     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.566199 1657011 logs.go:138] Found kubelet problem: May 20 11:22:52 old-k8s-version-776336 kubelet[730]: E0520 11:22:52.482120     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.568339 1657011 logs.go:138] Found kubelet problem: May 20 11:22:53 old-k8s-version-776336 kubelet[730]: E0520 11:22:53.323112     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0520 11:27:06.568700 1657011 logs.go:138] Found kubelet problem: May 20 11:23:05 old-k8s-version-776336 kubelet[730]: E0520 11:23:05.311216     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.568889 1657011 logs.go:138] Found kubelet problem: May 20 11:23:05 old-k8s-version-776336 kubelet[730]: E0520 11:23:05.312235     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.569217 1657011 logs.go:138] Found kubelet problem: May 20 11:23:18 old-k8s-version-776336 kubelet[730]: E0520 11:23:18.311005     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.569422 1657011 logs.go:138] Found kubelet problem: May 20 11:23:19 old-k8s-version-776336 kubelet[730]: E0520 11:23:19.311931     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.569956 1657011 logs.go:138] Found kubelet problem: May 20 11:23:34 old-k8s-version-776336 kubelet[730]: E0520 11:23:34.312149     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.570314 1657011 logs.go:138] Found kubelet problem: May 20 11:23:34 old-k8s-version-776336 kubelet[730]: E0520 11:23:34.647790     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.570675 1657011 logs.go:138] Found kubelet problem: May 20 11:23:42 old-k8s-version-776336 kubelet[730]: E0520 11:23:42.482249     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.570884 1657011 logs.go:138] Found kubelet problem: May 20 11:23:48 old-k8s-version-776336 kubelet[730]: E0520 11:23:48.311989     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.571243 1657011 logs.go:138] Found kubelet problem: May 20 11:23:54 old-k8s-version-776336 kubelet[730]: E0520 11:23:54.311066     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.571465 1657011 logs.go:138] Found kubelet problem: May 20 11:24:02 old-k8s-version-776336 kubelet[730]: E0520 11:24:02.312243     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.571815 1657011 logs.go:138] Found kubelet problem: May 20 11:24:06 old-k8s-version-776336 kubelet[730]: E0520 11:24:06.313386     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.574161 1657011 logs.go:138] Found kubelet problem: May 20 11:24:17 old-k8s-version-776336 kubelet[730]: E0520 11:24:17.319931     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0520 11:27:06.574523 1657011 logs.go:138] Found kubelet problem: May 20 11:24:19 old-k8s-version-776336 kubelet[730]: E0520 11:24:19.310996     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.574733 1657011 logs.go:138] Found kubelet problem: May 20 11:24:28 old-k8s-version-776336 kubelet[730]: E0520 11:24:28.311548     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.575111 1657011 logs.go:138] Found kubelet problem: May 20 11:24:32 old-k8s-version-776336 kubelet[730]: E0520 11:24:32.310984     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.575465 1657011 logs.go:138] Found kubelet problem: May 20 11:24:43 old-k8s-version-776336 kubelet[730]: E0520 11:24:43.311302     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.575675 1657011 logs.go:138] Found kubelet problem: May 20 11:24:43 old-k8s-version-776336 kubelet[730]: E0520 11:24:43.312650     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.576077 1657011 logs.go:138] Found kubelet problem: May 20 11:24:58 old-k8s-version-776336 kubelet[730]: E0520 11:24:58.313801     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.576602 1657011 logs.go:138] Found kubelet problem: May 20 11:24:59 old-k8s-version-776336 kubelet[730]: E0520 11:24:59.771509     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.577160 1657011 logs.go:138] Found kubelet problem: May 20 11:25:02 old-k8s-version-776336 kubelet[730]: E0520 11:25:02.482351     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.577361 1657011 logs.go:138] Found kubelet problem: May 20 11:25:13 old-k8s-version-776336 kubelet[730]: E0520 11:25:13.311740     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.577700 1657011 logs.go:138] Found kubelet problem: May 20 11:25:14 old-k8s-version-776336 kubelet[730]: E0520 11:25:14.311041     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.578552 1657011 logs.go:138] Found kubelet problem: May 20 11:25:26 old-k8s-version-776336 kubelet[730]: E0520 11:25:26.311269     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.578740 1657011 logs.go:138] Found kubelet problem: May 20 11:25:28 old-k8s-version-776336 kubelet[730]: E0520 11:25:28.311449     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.579075 1657011 logs.go:138] Found kubelet problem: May 20 11:25:40 old-k8s-version-776336 kubelet[730]: E0520 11:25:40.311409     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.579263 1657011 logs.go:138] Found kubelet problem: May 20 11:25:40 old-k8s-version-776336 kubelet[730]: E0520 11:25:40.312759     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.579647 1657011 logs.go:138] Found kubelet problem: May 20 11:25:52 old-k8s-version-776336 kubelet[730]: E0520 11:25:52.311113     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.579896 1657011 logs.go:138] Found kubelet problem: May 20 11:25:53 old-k8s-version-776336 kubelet[730]: E0520 11:25:53.311487     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.580290 1657011 logs.go:138] Found kubelet problem: May 20 11:26:05 old-k8s-version-776336 kubelet[730]: E0520 11:26:05.311044     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.580502 1657011 logs.go:138] Found kubelet problem: May 20 11:26:07 old-k8s-version-776336 kubelet[730]: E0520 11:26:07.311454     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.581101 1657011 logs.go:138] Found kubelet problem: May 20 11:26:17 old-k8s-version-776336 kubelet[730]: E0520 11:26:17.312403     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.581310 1657011 logs.go:138] Found kubelet problem: May 20 11:26:22 old-k8s-version-776336 kubelet[730]: E0520 11:26:22.311955     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.581693 1657011 logs.go:138] Found kubelet problem: May 20 11:26:29 old-k8s-version-776336 kubelet[730]: E0520 11:26:29.310991     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.581904 1657011 logs.go:138] Found kubelet problem: May 20 11:26:35 old-k8s-version-776336 kubelet[730]: E0520 11:26:35.311614     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.582279 1657011 logs.go:138] Found kubelet problem: May 20 11:26:40 old-k8s-version-776336 kubelet[730]: E0520 11:26:40.310989     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.582688 1657011 logs.go:138] Found kubelet problem: May 20 11:26:46 old-k8s-version-776336 kubelet[730]: E0520 11:26:46.315105     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.583045 1657011 logs.go:138] Found kubelet problem: May 20 11:26:53 old-k8s-version-776336 kubelet[730]: E0520 11:26:53.310994     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.583283 1657011 logs.go:138] Found kubelet problem: May 20 11:26:57 old-k8s-version-776336 kubelet[730]: E0520 11:26:57.312314     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.583664 1657011 logs.go:138] Found kubelet problem: May 20 11:27:06 old-k8s-version-776336 kubelet[730]: E0520 11:27:06.311519     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
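	
	Every "Found kubelet problem" warning above comes from scanning the journal tail fetched two lines earlier (`journalctl -u kubelet -n 400`) for known error signatures: reflector watch failures and pod sync errors. A sketch of that scan with an illustrative two-entry signature list; minikube's actual rule set is larger:
	
	```go
	package main
	
	import (
		"bufio"
		"fmt"
		"os/exec"
		"strings"
	)
	
	// findKubeletProblems tails the kubelet journal and flags lines that
	// match simple problem signatures, roughly what the warnings above
	// reflect.
	func findKubeletProblems() ([]string, error) {
		out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
		if err != nil {
			return nil, err
		}
		signatures := []string{"Failed to watch", "Error syncing pod"}
		var problems []string
		sc := bufio.NewScanner(strings.NewReader(string(out)))
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be long
		for sc.Scan() {
			line := sc.Text()
			for _, sig := range signatures {
				if strings.Contains(line, sig) {
					problems = append(problems, line)
					break
				}
			}
		}
		return problems, sc.Err()
	}
	
	func main() {
		p, err := findKubeletProblems()
		fmt.Println(len(p), "problems, err:", err)
	}
	```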
	I0520 11:27:06.583679 1657011 logs.go:123] Gathering logs for dmesg ...
	I0520 11:27:06.583709 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:27:06.607566 1657011 logs.go:123] Gathering logs for etcd [a0c29f0d3e719d5fdc0931e9109cfae19b3167c2b4e839f25c7b1b63784a02a4] ...
	I0520 11:27:06.607599 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0c29f0d3e719d5fdc0931e9109cfae19b3167c2b4e839f25c7b1b63784a02a4"
	I0520 11:27:06.704593 1657011 logs.go:123] Gathering logs for storage-provisioner [80aac8daed1fe77959700c7a1f1a34853c09f3dbeb46648d0ab015345f91e607] ...
	I0520 11:27:06.704637 1657011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80aac8daed1fe77959700c7a1f1a34853c09f3dbeb46648d0ab015345f91e607"
	I0520 11:27:06.755173 1657011 out.go:304] Setting ErrFile to fd 2...
	I0520 11:27:06.755199 1657011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0520 11:27:06.755246 1657011 out.go:239] X Problems detected in kubelet:
	W0520 11:27:06.755260 1657011 out.go:239]   May 20 11:26:40 old-k8s-version-776336 kubelet[730]: E0520 11:26:40.310989     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.755267 1657011 out.go:239]   May 20 11:26:46 old-k8s-version-776336 kubelet[730]: E0520 11:26:46.315105     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.755278 1657011 out.go:239]   May 20 11:26:53 old-k8s-version-776336 kubelet[730]: E0520 11:26:53.310994     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	W0520 11:27:06.755303 1657011 out.go:239]   May 20 11:26:57 old-k8s-version-776336 kubelet[730]: E0520 11:26:57.312314     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:27:06.755309 1657011 out.go:239]   May 20 11:27:06 old-k8s-version-776336 kubelet[730]: E0520 11:27:06.311519     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	I0520 11:27:06.755316 1657011 out.go:304] Setting ErrFile to fd 2...
	I0520 11:27:06.755327 1657011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:27:06.293052 1665508 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18925-1463640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-746238:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.887805558s)
	I0520 11:27:06.293083 1665508 kic.go:203] duration metric: took 4.887966548s to extract preloaded images to volume ...
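	
	The "duration metric: took ..." lines above are produced by timing each long-running command and logging the elapsed time once it completes. The pattern is trivial but worth showing; a minimal sketch:
	
	```go
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	// timedRun executes a command and prints a duration metric in the style
	// of the "Completed: ... (4.887805558s)" lines above.
	func timedRun(name string, args ...string) ([]byte, error) {
		start := time.Now()
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("duration metric: took %s to run %s %v\n",
			time.Since(start), name, args)
		return out, err
	}
	
	func main() {
		out, err := timedRun("docker", "version", "--format", "{{.Server.Version}}")
		fmt.Printf("%s err=%v\n", out, err)
	}
	```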
	W0520 11:27:06.293234 1665508 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0520 11:27:06.293349 1665508 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0520 11:27:06.371689 1665508 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-746238 --name embed-certs-746238 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-746238 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-746238 --network embed-certs-746238 --ip 192.168.85.2 --volume embed-certs-746238:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0520 11:27:06.811574 1665508 cli_runner.go:164] Run: docker container inspect embed-certs-746238 --format={{.State.Running}}
	I0520 11:27:06.829235 1665508 cli_runner.go:164] Run: docker container inspect embed-certs-746238 --format={{.State.Status}}
	I0520 11:27:06.848881 1665508 cli_runner.go:164] Run: docker exec embed-certs-746238 stat /var/lib/dpkg/alternatives/iptables
	I0520 11:27:06.928492 1665508 oci.go:144] the created container "embed-certs-746238" has a running status.
	I0520 11:27:06.928537 1665508 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18925-1463640/.minikube/machines/embed-certs-746238/id_rsa...
	I0520 11:27:07.956162 1665508 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18925-1463640/.minikube/machines/embed-certs-746238/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0520 11:27:07.976291 1665508 cli_runner.go:164] Run: docker container inspect embed-certs-746238 --format={{.State.Status}}
	I0520 11:27:07.994228 1665508 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0520 11:27:07.994258 1665508 kic_runner.go:114] Args: [docker exec --privileged embed-certs-746238 chown docker:docker /home/docker/.ssh/authorized_keys]
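	
	The two kic_runner steps above install the freshly generated public key inside the node container and fix its ownership with a privileged exec. A hedged sketch of the same provisioning via the docker CLI (the log's kic_runner streams the key over exec rather than using `docker cp`, and the pubkey path in `main` is a hypothetical placeholder):
	
	```go
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// installAuthorizedKey copies a public key into the node container as
	// /home/docker/.ssh/authorized_keys and chowns it to the docker user,
	// mirroring the kic_runner steps above.
	func installAuthorizedKey(container, pubKeyPath string) error {
		steps := [][]string{
			{"docker", "exec", "--privileged", container, "mkdir", "-p", "/home/docker/.ssh"},
			{"docker", "cp", pubKeyPath, container + ":/home/docker/.ssh/authorized_keys"},
			{"docker", "exec", "--privileged", container, "chown", "docker:docker", "/home/docker/.ssh/authorized_keys"},
		}
		for _, s := range steps {
			if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v: %s", s, err, out)
			}
		}
		return nil
	}
	
	func main() {
		fmt.Println(installAuthorizedKey("embed-certs-746238",
			"/home/user/.minikube/machines/embed-certs-746238/id_rsa.pub")) // hypothetical path
	}
	```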
	I0520 11:27:08.051376 1665508 cli_runner.go:164] Run: docker container inspect embed-certs-746238 --format={{.State.Status}}
	I0520 11:27:08.075007 1665508 machine.go:94] provisionDockerMachine start ...
	I0520 11:27:08.075133 1665508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-746238
	I0520 11:27:08.099543 1665508 main.go:141] libmachine: Using SSH client type: native
	I0520 11:27:08.099913 1665508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 40797 <nil> <nil>}
	I0520 11:27:08.099928 1665508 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:27:08.233300 1665508 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-746238
	
	I0520 11:27:08.233373 1665508 ubuntu.go:169] provisioning hostname "embed-certs-746238"
	I0520 11:27:08.233479 1665508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-746238
	I0520 11:27:08.253156 1665508 main.go:141] libmachine: Using SSH client type: native
	I0520 11:27:08.253419 1665508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 40797 <nil> <nil>}
	I0520 11:27:08.253432 1665508 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-746238 && echo "embed-certs-746238" | sudo tee /etc/hostname
	I0520 11:27:08.399432 1665508 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-746238
	
	I0520 11:27:08.399512 1665508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-746238
	I0520 11:27:08.416598 1665508 main.go:141] libmachine: Using SSH client type: native
	I0520 11:27:08.416859 1665508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 40797 <nil> <nil>}
	I0520 11:27:08.416878 1665508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-746238' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-746238/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-746238' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:27:08.543293 1665508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
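	
	The empty SSH output above means the /etc/hosts guard script did nothing: the hostname entry already existed. The script itself is idempotent, rewriting the 127.0.1.1 line only when needed. A sketch that pushes the same shell snippet through `docker exec` instead of the native SSH client used in the log:
	
	```go
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// ensureEtcHosts runs the same idempotent guard seen in the log: add or
	// rewrite the 127.0.1.1 entry for the hostname unless /etc/hosts is
	// already consistent.
	func ensureEtcHosts(container, hostname string) error {
		script := fmt.Sprintf(`
			if ! grep -xq '.*\s%[1]s' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
				else
					echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
				fi
			fi`, hostname)
		out, err := exec.Command("docker", "exec", container, "/bin/bash", "-c", script).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
		return nil
	}
	
	func main() {
		fmt.Println(ensureEtcHosts("embed-certs-746238", "embed-certs-746238"))
	}
	```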
	I0520 11:27:08.543374 1665508 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18925-1463640/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-1463640/.minikube}
	I0520 11:27:08.543446 1665508 ubuntu.go:177] setting up certificates
	I0520 11:27:08.543490 1665508 provision.go:84] configureAuth start
	I0520 11:27:08.543587 1665508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-746238
	I0520 11:27:08.561331 1665508 provision.go:143] copyHostCerts
	I0520 11:27:08.561400 1665508 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.pem, removing ...
	I0520 11:27:08.561410 1665508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.pem
	I0520 11:27:08.561490 1665508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.pem (1082 bytes)
	I0520 11:27:08.561582 1665508 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-1463640/.minikube/cert.pem, removing ...
	I0520 11:27:08.561587 1665508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-1463640/.minikube/cert.pem
	I0520 11:27:08.561612 1665508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-1463640/.minikube/cert.pem (1123 bytes)
	I0520 11:27:08.561818 1665508 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-1463640/.minikube/key.pem, removing ...
	I0520 11:27:08.561834 1665508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-1463640/.minikube/key.pem
	I0520 11:27:08.561878 1665508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-1463640/.minikube/key.pem (1679 bytes)
	I0520 11:27:08.561945 1665508 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca-key.pem org=jenkins.embed-certs-746238 san=[127.0.0.1 192.168.85.2 embed-certs-746238 localhost minikube]
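	The server cert generated above is a plain CA-signed certificate whose subjectAltName covers the san=[...] list in the previous line. A minimal openssl sketch of the equivalent steps (file names and the 2048-bit key size are illustrative assumptions; minikube does this in-process in Go):

	  # Sketch: issue a server cert signed by minikube's CA with the SANs shown above.
	  openssl genrsa -out server-key.pem 2048
	  openssl req -new -key server-key.pem -subj "/O=jenkins.embed-certs-746238" -out server.csr
	  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -out server.pem \
	    -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:embed-certs-746238,DNS:localhost,DNS:minikube")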
	I0520 11:27:08.936633 1665508 provision.go:177] copyRemoteCerts
	I0520 11:27:08.936730 1665508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:27:08.936799 1665508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-746238
	I0520 11:27:08.955145 1665508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40797 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/embed-certs-746238/id_rsa Username:docker}
	I0520 11:27:09.051398 1665508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 11:27:09.078264 1665508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0520 11:27:09.104725 1665508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 11:27:09.130368 1665508 provision.go:87] duration metric: took 586.849403ms to configureAuth
	I0520 11:27:09.130396 1665508 ubuntu.go:193] setting minikube options for container-runtime
	I0520 11:27:09.130625 1665508 config.go:182] Loaded profile config "embed-certs-746238": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:27:09.130767 1665508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-746238
	I0520 11:27:09.155275 1665508 main.go:141] libmachine: Using SSH client type: native
	I0520 11:27:09.155513 1665508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 40797 <nil> <nil>}
	I0520 11:27:09.155528 1665508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:27:09.416964 1665508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:27:09.416996 1665508 machine.go:97] duration metric: took 1.341968279s to provisionDockerMachine
	I0520 11:27:09.417007 1665508 client.go:171] duration metric: took 8.848661468s to LocalClient.Create
	I0520 11:27:09.417024 1665508 start.go:167] duration metric: took 8.848727215s to libmachine.API.Create "embed-certs-746238"
	I0520 11:27:09.417033 1665508 start.go:293] postStartSetup for "embed-certs-746238" (driver="docker")
	I0520 11:27:09.417046 1665508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:27:09.417116 1665508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:27:09.417166 1665508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-746238
	I0520 11:27:09.432954 1665508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40797 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/embed-certs-746238/id_rsa Username:docker}
	I0520 11:27:09.528268 1665508 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:27:09.532125 1665508 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0520 11:27:09.532164 1665508 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0520 11:27:09.532182 1665508 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0520 11:27:09.532194 1665508 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0520 11:27:09.532211 1665508 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-1463640/.minikube/addons for local assets ...
	I0520 11:27:09.532268 1665508 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-1463640/.minikube/files for local assets ...
	I0520 11:27:09.532367 1665508 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-1463640/.minikube/files/etc/ssl/certs/14690782.pem -> 14690782.pem in /etc/ssl/certs
	I0520 11:27:09.532496 1665508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:27:09.543199 1665508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/files/etc/ssl/certs/14690782.pem --> /etc/ssl/certs/14690782.pem (1708 bytes)
	I0520 11:27:09.577373 1665508 start.go:296] duration metric: took 160.320304ms for postStartSetup
	I0520 11:27:09.577944 1665508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-746238
	I0520 11:27:09.600428 1665508 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/config.json ...
	I0520 11:27:09.600778 1665508 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 11:27:09.600832 1665508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-746238
	I0520 11:27:09.620305 1665508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40797 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/embed-certs-746238/id_rsa Username:docker}
	I0520 11:27:09.710678 1665508 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
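	The two df probes bracketing this step sample disk pressure on /var before the machines lock is released. Approximately what they return (values are illustrative):

	  df -h /var | awk 'NR==2{print $5}'    # used space as a percentage, e.g. "12%"
	  df -BG /var | awk 'NR==2{print $4}'   # free space in whole GiB, e.g. "180G"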
	I0520 11:27:09.715460 1665508 start.go:128] duration metric: took 9.150596669s to createHost
	I0520 11:27:09.715487 1665508 start.go:83] releasing machines lock for "embed-certs-746238", held for 9.150751356s
	I0520 11:27:09.715581 1665508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-746238
	I0520 11:27:09.731791 1665508 ssh_runner.go:195] Run: cat /version.json
	I0520 11:27:09.731853 1665508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-746238
	I0520 11:27:09.732100 1665508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:27:09.732143 1665508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-746238
	I0520 11:27:09.752141 1665508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40797 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/embed-certs-746238/id_rsa Username:docker}
	I0520 11:27:09.759892 1665508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40797 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/embed-certs-746238/id_rsa Username:docker}
	I0520 11:27:09.846777 1665508 ssh_runner.go:195] Run: systemctl --version
	I0520 11:27:09.959580 1665508 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:27:10.106065 1665508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 11:27:10.111356 1665508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:27:10.134996 1665508 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0520 11:27:10.135124 1665508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:27:10.170673 1665508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
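	The two find/-exec mv commands above disable any loopback and bridge/podman CNI configs by renaming them with a .mk_disabled suffix, leaving the CNI directory clear for kindnet. A quick way to inspect the result on the node (sketch):

	  ls /etc/cni/net.d/*.mk_disabled            # configs minikube renamed out of the way
	  ls /etc/cni/net.d/ | grep -v mk_disabled   # whatever remains is still active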
	I0520 11:27:10.170700 1665508 start.go:494] detecting cgroup driver to use...
	I0520 11:27:10.170733 1665508 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0520 11:27:10.170783 1665508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:27:10.189841 1665508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:27:10.203756 1665508 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:27:10.203827 1665508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:27:10.218481 1665508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:27:10.237242 1665508 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:27:10.330397 1665508 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:27:10.438865 1665508 docker.go:233] disabling docker service ...
	I0520 11:27:10.438985 1665508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:27:10.462205 1665508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:27:10.475142 1665508 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:27:10.562501 1665508 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:27:10.660982 1665508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:27:10.674150 1665508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:27:10.691611 1665508 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:27:10.691709 1665508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:27:10.702249 1665508 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:27:10.702372 1665508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:27:10.713132 1665508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:27:10.724877 1665508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:27:10.735267 1665508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:27:10.745751 1665508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:27:10.756344 1665508 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:27:10.777391 1665508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
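	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager, conmon cgroup, and unprivileged-port sysctl pinned. One way to confirm on the node (key names taken from the commands above; surrounding file layout may differ):

	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  # Expected, approximately:
	  #   pause_image = "registry.k8s.io/pause:3.9"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",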
	I0520 11:27:10.792747 1665508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:27:10.803771 1665508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:27:10.814203 1665508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:27:10.899138 1665508 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:27:11.023537 1665508 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:27:11.023673 1665508 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:27:11.027637 1665508 start.go:562] Will wait 60s for crictl version
	I0520 11:27:11.027705 1665508 ssh_runner.go:195] Run: which crictl
	I0520 11:27:11.031338 1665508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:27:11.077326 1665508 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0520 11:27:11.077454 1665508 ssh_runner.go:195] Run: crio --version
	I0520 11:27:11.119646 1665508 ssh_runner.go:195] Run: crio --version
	I0520 11:27:11.164467 1665508 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.24.6 ...
	I0520 11:27:11.166173 1665508 cli_runner.go:164] Run: docker network inspect embed-certs-746238 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0520 11:27:11.180141 1665508 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0520 11:27:11.184136 1665508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
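	The bash one-liner above is an atomic /etc/hosts edit: it filters out any stale host.minikube.internal entry, appends the current gateway IP, writes to a temp file, and copies the whole file back in one step. Verifying the result (sketch):

	  grep host.minikube.internal /etc/hosts
	  # 192.168.85.1	host.minikube.internal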
	I0520 11:27:11.197103 1665508 kubeadm.go:877] updating cluster {Name:embed-certs-746238 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-746238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:27:11.197221 1665508 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:27:11.197280 1665508 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:27:11.280961 1665508 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 11:27:11.280984 1665508 crio.go:433] Images already preloaded, skipping extraction
	I0520 11:27:11.281044 1665508 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:27:11.322476 1665508 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 11:27:11.322539 1665508 cache_images.go:84] Images are preloaded, skipping loading
	I0520 11:27:11.322554 1665508 kubeadm.go:928] updating node { 192.168.85.2 8443 v1.30.1 crio true true} ...
	I0520 11:27:11.322658 1665508 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-746238 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-746238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
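	The empty ExecStart= line in the kubelet drop-in above is the standard systemd override idiom: it clears the ExecStart inherited from the packaged kubelet.service so the following ExecStart fully replaces it instead of adding a second command. To see the merged result on the node (sketch):

	  systemctl cat kubelet      # unit plus drop-ins, as systemd will run it
	  systemctl daemon-reload    # pick up the 10-kubeadm.conf drop-in written below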
	I0520 11:27:11.322747 1665508 ssh_runner.go:195] Run: crio config
	I0520 11:27:11.375668 1665508 cni.go:84] Creating CNI manager for ""
	I0520 11:27:11.375740 1665508 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0520 11:27:11.375757 1665508 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:27:11.375783 1665508 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-746238 NodeName:embed-certs-746238 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:27:11.375946 1665508 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-746238"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
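	Once this rendered config is copied to /var/tmp/minikube/kubeadm.yaml (a few lines below), it can be sanity-checked offline before init runs. A hedged sketch using the v1.30.1 binaries staged on the node (kubeadm config validate exists in this release line):

	  sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml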
	I0520 11:27:11.376044 1665508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:27:11.385496 1665508 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:27:11.385634 1665508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:27:11.394884 1665508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0520 11:27:11.415827 1665508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:27:11.434168 1665508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0520 11:27:11.453979 1665508 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0520 11:27:11.457573 1665508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:27:11.468519 1665508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:27:11.555348 1665508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:27:11.570584 1665508 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238 for IP: 192.168.85.2
	I0520 11:27:11.570607 1665508 certs.go:194] generating shared ca certs ...
	I0520 11:27:11.570624 1665508 certs.go:226] acquiring lock for ca certs: {Name:mke113fbac30e255083f63bab9dafb629ead7667 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:27:11.570762 1665508 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.key
	I0520 11:27:11.570811 1665508 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/proxy-client-ca.key
	I0520 11:27:11.570822 1665508 certs.go:256] generating profile certs ...
	I0520 11:27:11.570880 1665508 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/client.key
	I0520 11:27:11.570906 1665508 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/client.crt with IP's: []
	I0520 11:27:11.796196 1665508 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/client.crt ...
	I0520 11:27:11.796225 1665508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/client.crt: {Name:mke2ede68d639e3e144c8b9095c2e663df86e76f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:27:11.796429 1665508 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/client.key ...
	I0520 11:27:11.796446 1665508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/client.key: {Name:mk33815f5a85ebdae382a81160462c34db373791 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:27:11.796545 1665508 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/apiserver.key.b7af1d52
	I0520 11:27:11.796565 1665508 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/apiserver.crt.b7af1d52 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0520 11:27:12.638514 1665508 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/apiserver.crt.b7af1d52 ...
	I0520 11:27:12.638554 1665508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/apiserver.crt.b7af1d52: {Name:mk1f59793193340f3de3b32ee110b4ec37cf4167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:27:12.639776 1665508 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/apiserver.key.b7af1d52 ...
	I0520 11:27:12.639800 1665508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/apiserver.key.b7af1d52: {Name:mkc9ec541dd62114a15c3122f3bb8baddd454da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:27:12.640318 1665508 certs.go:381] copying /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/apiserver.crt.b7af1d52 -> /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/apiserver.crt
	I0520 11:27:12.640410 1665508 certs.go:385] copying /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/apiserver.key.b7af1d52 -> /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/apiserver.key
	I0520 11:27:12.640474 1665508 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/proxy-client.key
	I0520 11:27:12.640494 1665508 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/proxy-client.crt with IP's: []
	I0520 11:27:13.154256 1665508 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/proxy-client.crt ...
	I0520 11:27:13.154291 1665508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/proxy-client.crt: {Name:mk660937c217e737b17b6975257d16b8b7b0d306 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:27:13.155151 1665508 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/proxy-client.key ...
	I0520 11:27:13.155178 1665508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/proxy-client.key: {Name:mkb7ed79e514a4a4bde816774e6ff3e761b782ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:27:13.155921 1665508 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/1469078.pem (1338 bytes)
	W0520 11:27:13.155974 1665508 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/1469078_empty.pem, impossibly tiny 0 bytes
	I0520 11:27:13.155985 1665508 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 11:27:13.156012 1665508 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/ca.pem (1082 bytes)
	I0520 11:27:13.156039 1665508 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:27:13.156061 1665508 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/key.pem (1679 bytes)
	I0520 11:27:13.156107 1665508 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-1463640/.minikube/files/etc/ssl/certs/14690782.pem (1708 bytes)
	I0520 11:27:13.156743 1665508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:27:13.187895 1665508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:27:13.214484 1665508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:27:13.245487 1665508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 11:27:13.271348 1665508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0520 11:27:13.297443 1665508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:27:13.323123 1665508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:27:13.348327 1665508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/embed-certs-746238/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 11:27:13.374628 1665508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:27:13.403752 1665508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/certs/1469078.pem --> /usr/share/ca-certificates/1469078.pem (1338 bytes)
	I0520 11:27:13.428543 1665508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-1463640/.minikube/files/etc/ssl/certs/14690782.pem --> /usr/share/ca-certificates/14690782.pem (1708 bytes)
	I0520 11:27:13.453033 1665508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:27:13.472987 1665508 ssh_runner.go:195] Run: openssl version
	I0520 11:27:13.478696 1665508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:27:13.488267 1665508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:27:13.491768 1665508 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:27:13.491880 1665508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:27:13.498826 1665508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:27:13.508338 1665508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1469078.pem && ln -fs /usr/share/ca-certificates/1469078.pem /etc/ssl/certs/1469078.pem"
	I0520 11:27:13.517833 1665508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1469078.pem
	I0520 11:27:13.521435 1665508 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:36 /usr/share/ca-certificates/1469078.pem
	I0520 11:27:13.521540 1665508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1469078.pem
	I0520 11:27:13.528818 1665508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1469078.pem /etc/ssl/certs/51391683.0"
	I0520 11:27:13.538327 1665508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14690782.pem && ln -fs /usr/share/ca-certificates/14690782.pem /etc/ssl/certs/14690782.pem"
	I0520 11:27:13.547784 1665508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14690782.pem
	I0520 11:27:13.551853 1665508 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:36 /usr/share/ca-certificates/14690782.pem
	I0520 11:27:13.551966 1665508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14690782.pem
	I0520 11:27:13.558891 1665508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14690782.pem /etc/ssl/certs/3ec20f2e.0"
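	The 8-hex-digit link names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes: OpenSSL resolves trust lookups in /etc/ssl/certs via a <subject-hash>.0 symlink per certificate, which is exactly what the openssl x509 -hash calls compute. By hand:

	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0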
	I0520 11:27:13.569102 1665508 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:27:13.572446 1665508 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 11:27:13.572536 1665508 kubeadm.go:391] StartCluster: {Name:embed-certs-746238 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-746238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:27:13.572651 1665508 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:27:13.572726 1665508 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:27:13.613967 1665508 cri.go:89] found id: ""
	I0520 11:27:13.614057 1665508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 11:27:13.623199 1665508 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:27:13.632286 1665508 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0520 11:27:13.632409 1665508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:27:13.641379 1665508 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:27:13.641443 1665508 kubeadm.go:156] found existing configuration files:
	
	I0520 11:27:13.641517 1665508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:27:13.652097 1665508 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:27:13.652186 1665508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:27:13.661060 1665508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:27:13.670488 1665508 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:27:13.670558 1665508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:27:13.679142 1665508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:27:13.688404 1665508 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:27:13.688525 1665508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:27:13.697799 1665508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:27:13.707003 1665508 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:27:13.707079 1665508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:27:13.715880 1665508 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0520 11:27:13.825528 1665508 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1058-aws\n", err: exit status 1
	I0520 11:27:13.896087 1665508 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
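	Both warnings are tolerable here: SystemVerification is explicitly skipped via --ignore-preflight-errors in the init command above (the kernel "configs" module is absent on this aws kernel), and minikube starts kubelet itself rather than relying on systemd enablement. To reproduce just the preflight stage against the same config (sketch):

	  sudo /var/lib/minikube/binaries/v1.30.1/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml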
	I0520 11:27:16.756505 1657011 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0520 11:27:16.769614 1657011 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0520 11:27:16.771733 1657011 out.go:177] 
	W0520 11:27:16.773260 1657011 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0520 11:27:16.773298 1657011 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0520 11:27:16.773326 1657011 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0520 11:27:16.773332 1657011 out.go:239] * 
	W0520 11:27:16.775071 1657011 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 11:27:16.777544 1657011 out.go:177] 
	
	
	==> CRI-O <==
	May 20 11:24:58 old-k8s-version-776336 crio[621]: time="2024-05-20 11:24:58.792262583Z" level=info msg="Removed container 439ff54dab0c5f40db544109ff278f18e63f26bfa5b570d15c2d725d7394fafe: kubernetes-dashboard/dashboard-metrics-scraper-8d5bb5db8-tzcxg/dashboard-metrics-scraper" id=b778c3f8-b12e-4948-b67e-ab41603d37e2 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	May 20 11:25:13 old-k8s-version-776336 crio[621]: time="2024-05-20 11:25:13.311030947Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=5e6e6bfb-c538-468d-a1b0-99b19da95357 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 20 11:25:13 old-k8s-version-776336 crio[621]: time="2024-05-20 11:25:13.311284808Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=5e6e6bfb-c538-468d-a1b0-99b19da95357 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 20 11:25:28 old-k8s-version-776336 crio[621]: time="2024-05-20 11:25:28.310928300Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=0d11a2db-891a-493f-8f28-ad65ef3dec6a name=/runtime.v1alpha2.ImageService/ImageStatus
	May 20 11:25:28 old-k8s-version-776336 crio[621]: time="2024-05-20 11:25:28.311209016Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=0d11a2db-891a-493f-8f28-ad65ef3dec6a name=/runtime.v1alpha2.ImageService/ImageStatus
	May 20 11:25:40 old-k8s-version-776336 crio[621]: time="2024-05-20 11:25:40.311807843Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=410e5966-9212-4bd5-9aef-f327d26a26f5 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 20 11:25:40 old-k8s-version-776336 crio[621]: time="2024-05-20 11:25:40.312344577Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=410e5966-9212-4bd5-9aef-f327d26a26f5 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 20 11:25:53 old-k8s-version-776336 crio[621]: time="2024-05-20 11:25:53.310991771Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=f17fc58a-b8e3-4226-9be1-ecf679c769c1 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 20 11:25:53 old-k8s-version-776336 crio[621]: time="2024-05-20 11:25:53.311244254Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=f17fc58a-b8e3-4226-9be1-ecf679c769c1 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 20 11:26:07 old-k8s-version-776336 crio[621]: time="2024-05-20 11:26:07.310965883Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=ebeed1c9-bde8-4ef0-9be5-ac3ad8d2e577 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 20 11:26:07 old-k8s-version-776336 crio[621]: time="2024-05-20 11:26:07.311211719Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=ebeed1c9-bde8-4ef0-9be5-ac3ad8d2e577 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 20 11:26:16 old-k8s-version-776336 crio[621]: time="2024-05-20 11:26:16.297246979Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=860eea10-a036-43c9-8098-86ad2c1e0385 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 20 11:26:16 old-k8s-version-776336 crio[621]: time="2024-05-20 11:26:16.297586008Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c,RepoTags:[k8s.gcr.io/pause:3.2 registry.k8s.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:31d3efd12022ffeffb3146bc10ae8beb890c80ed2f07363515580add7ed47636 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f registry.k8s.io/pause@sha256:31d3efd12022ffeffb3146bc10ae8beb890c80ed2f07363515580add7ed47636 registry.k8s.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:489397,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=860eea10-a036-43c9-8098-86ad2c1e0385 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 20 11:26:22 old-k8s-version-776336 crio[621]: time="2024-05-20 11:26:22.311178422Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=cd4a08ba-e376-4ddb-99a2-18c128f3519f name=/runtime.v1alpha2.ImageService/ImageStatus
	May 20 11:26:22 old-k8s-version-776336 crio[621]: time="2024-05-20 11:26:22.311480922Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=cd4a08ba-e376-4ddb-99a2-18c128f3519f name=/runtime.v1alpha2.ImageService/ImageStatus
	May 20 11:26:35 old-k8s-version-776336 crio[621]: time="2024-05-20 11:26:35.311053224Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=07f80e4c-0007-4099-8ca9-698cf81a32ba name=/runtime.v1alpha2.ImageService/ImageStatus
	May 20 11:26:35 old-k8s-version-776336 crio[621]: time="2024-05-20 11:26:35.311398036Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=07f80e4c-0007-4099-8ca9-698cf81a32ba name=/runtime.v1alpha2.ImageService/ImageStatus
	May 20 11:26:46 old-k8s-version-776336 crio[621]: time="2024-05-20 11:26:46.314304768Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=451a1125-d79b-48c1-b2fb-e59a063595a7 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 20 11:26:46 old-k8s-version-776336 crio[621]: time="2024-05-20 11:26:46.314538559Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=451a1125-d79b-48c1-b2fb-e59a063595a7 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 20 11:26:57 old-k8s-version-776336 crio[621]: time="2024-05-20 11:26:57.311648951Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=817c0442-8200-4590-b0e6-a3663feee710 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 20 11:26:57 old-k8s-version-776336 crio[621]: time="2024-05-20 11:26:57.311974416Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=817c0442-8200-4590-b0e6-a3663feee710 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 20 11:27:08 old-k8s-version-776336 crio[621]: time="2024-05-20 11:27:08.311159790Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=f5bee100-d7d9-4a58-94a8-65cd3ad1d15f name=/runtime.v1alpha2.ImageService/ImageStatus
	May 20 11:27:08 old-k8s-version-776336 crio[621]: time="2024-05-20 11:27:08.311403239Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=f5bee100-d7d9-4a58-94a8-65cd3ad1d15f name=/runtime.v1alpha2.ImageService/ImageStatus
	May 20 11:27:08 old-k8s-version-776336 crio[621]: time="2024-05-20 11:27:08.314721695Z" level=info msg="Pulling image: fake.domain/registry.k8s.io/echoserver:1.4" id=6a2ee8e6-7630-4b6d-a71f-a36f461035fd name=/runtime.v1alpha2.ImageService/PullImage
	May 20 11:27:08 old-k8s-version-776336 crio[621]: time="2024-05-20 11:27:08.328256216Z" level=info msg="Trying to access \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	9880d6ea4fbcf       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           2 minutes ago       Exited              dashboard-metrics-scraper   5                   1a2d3d31c4fff       dashboard-metrics-scraper-8d5bb5db8-tzcxg
	478b985a70c80       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   5 minutes ago       Running             kubernetes-dashboard        0                   4478b401b05c7       kubernetes-dashboard-cd95d586-cvhs4
	80aac8daed1fe       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           5 minutes ago       Running             storage-provisioner         0                   dc573c5810f1c       storage-provisioner
	b026cb8ba53f0       25a5233254979d0678a2db1d15b76b73dc380d81bc5eed93916ba5638b3cd894                                           5 minutes ago       Running             kube-proxy                  0                   354392dabb7db       kube-proxy-f5jcm
	b92863fb0bcbb       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d                                           5 minutes ago       Running             kindnet-cni                 0                   afa1f64103951       kindnet-qzxjc
	e346471df6217       db91994f4ee8f894a1e8a6c1a76f615da8fc3c019300a3686291ce6fcbc57895                                           5 minutes ago       Running             coredns                     0                   60c396b3b4da1       coredns-74ff55c5b-grqv6
	e3f2c9e695265       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           5 minutes ago       Running             busybox                     0                   620bb8efc1603       busybox
	0b14a0f156605       2c08bbbc02d3aa5dfbf4e79f15c0a61424049288917aa10364464ca1f7de7157                                           6 minutes ago       Running             kube-apiserver              0                   74a2830e21f5b       kube-apiserver-old-k8s-version-776336
	7f0c6c3acf70d       1df8a2b116bd16f7070fd383a6769c8d644b365575e8ffa3e492b84e4f05fc74                                           6 minutes ago       Running             kube-controller-manager     0                   553091a24ec94       kube-controller-manager-old-k8s-version-776336
	200301c79d92c       e7605f88f17d6a4c3f083ef9c6f5f19b39f87e4d4406a05a8612b54a6ea57051                                           6 minutes ago       Running             kube-scheduler              0                   b70fb0f2d6bf4       kube-scheduler-old-k8s-version-776336
	a0c29f0d3e719       05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28                                           6 minutes ago       Running             etcd                        0                   c47259a3b073f       etcd-old-k8s-version-776336
	
	
	==> coredns [e346471df6217fe773e2ef8f7b75f5b46cd71d6eb9cbc805448692e543b15853] <==
	I0520 11:21:58.977857       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-05-20 11:21:28.976607272 +0000 UTC m=+0.025190857) (total time: 30.00113921s):
	Trace[2019727887]: [30.00113921s] [30.00113921s] END
	E0520 11:21:58.978440       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0520 11:21:58.978735       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-05-20 11:21:28.978392667 +0000 UTC m=+0.026976261) (total time: 30.000325378s):
	Trace[939984059]: [30.000325378s] [30.000325378s] END
	E0520 11:21:58.978789       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0520 11:21:58.979208       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-05-20 11:21:28.97877094 +0000 UTC m=+0.027354525) (total time: 30.000422377s):
	Trace[911902081]: [30.000422377s] [30.000422377s] END
	E0520 11:21:58.979220       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:48798 - 18267 "HINFO IN 6774303480060935370.502628150519182392. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.035889633s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:39083 - 13419 "HINFO IN 2801442747761159507.7524682479264674889. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012610283s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
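The repeated 30s ListAndWatch timeouts against 10.96.0.1:443 at the top of this log show CoreDNS starting before the Service VIP was reachable; the later HINFO responses indicate it recovered after the restart. As a hedged follow-up (these spot-checks are not part of the test run and assume the same kubeconfig context), the VIP and its proxy can be verified with:

	# Confirm the kubernetes Service VIP CoreDNS was dialing (10.96.0.1:443)
	kubectl --context old-k8s-version-776336 -n default get svc kubernetes
	# Confirm kube-proxy, which programs that VIP, is running on the node
	kubectl --context old-k8s-version-776336 -n kube-system get pods -l k8s-app=kube-proxy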
	
	
	==> describe nodes <==
	Name:               old-k8s-version-776336
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-776336
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=old-k8s-version-776336
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T11_18_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:18:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-776336
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 11:27:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 11:22:17 +0000   Mon, 20 May 2024 11:18:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 11:22:17 +0000   Mon, 20 May 2024 11:18:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 11:22:17 +0000   Mon, 20 May 2024 11:18:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 11:22:17 +0000   Mon, 20 May 2024 11:19:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-776336
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 22cf92f476e446f680c31b2e6946156e
	  System UUID:                edf7f3a3-32e9-4e2e-a986-fcdc41cc3408
	  Boot ID:                    df9684e8-d429-41b3-8a9f-ef96b9c9133b
	  Kernel Version:             5.15.0-1058-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 coredns-74ff55c5b-grqv6                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m38s
	  kube-system                 etcd-old-k8s-version-776336                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m46s
	  kube-system                 kindnet-qzxjc                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m38s
	  kube-system                 kube-apiserver-old-k8s-version-776336             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 kube-controller-manager-old-k8s-version-776336    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 kube-proxy-f5jcm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 kube-scheduler-old-k8s-version-776336             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 metrics-server-9975d5f86-qdks4                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m31s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m37s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-tzcxg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-cvhs4               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  9m6s (x5 over 9m6s)  kubelet     Node old-k8s-version-776336 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m6s (x5 over 9m6s)  kubelet     Node old-k8s-version-776336 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m6s (x5 over 9m6s)  kubelet     Node old-k8s-version-776336 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m46s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m46s                kubelet     Node old-k8s-version-776336 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m46s                kubelet     Node old-k8s-version-776336 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m46s                kubelet     Node old-k8s-version-776336 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m36s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                7m56s                kubelet     Node old-k8s-version-776336 status is now: NodeReady
	  Normal  Starting                 6m2s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m2s (x8 over 6m2s)  kubelet     Node old-k8s-version-776336 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m2s (x8 over 6m2s)  kubelet     Node old-k8s-version-776336 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m2s (x8 over 6m2s)  kubelet     Node old-k8s-version-776336 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m49s                kube-proxy  Starting kube-proxy.
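The two "Starting kubelet." events (8m46s and 6m2s ago) line up with the FirstStart and SecondStart phases of this test. A minimal sketch for reading back the Ready condition the events describe (assuming the cluster is still up under the same context):

	# Prints "True" if the node is Ready, matching the NodeReady event above
	kubectl --context old-k8s-version-776336 get node old-k8s-version-776336 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'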
	
	
	==> dmesg <==
	[  +0.000732] FS-Cache: N-cookie c=00000186 [p=0000017d fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=00000000865130fc{9p.inode} n=00000000b52af1ec
	[  +0.001072] FS-Cache: N-key=[8] '86843b0000000000'
	[  +0.003288] FS-Cache: Duplicate cookie detected
	[  +0.000779] FS-Cache: O-cookie c=00000180 [p=0000017d fl=226 nc=0 na=1]
	[  +0.001126] FS-Cache: O-cookie d=00000000865130fc{9p.inode} n=00000000e8d17cb1
	[  +0.001082] FS-Cache: O-key=[8] '86843b0000000000'
	[  +0.000756] FS-Cache: N-cookie c=00000187 [p=0000017d fl=2 nc=0 na=1]
	[  +0.000963] FS-Cache: N-cookie d=00000000865130fc{9p.inode} n=00000000ed12de27
	[  +0.001175] FS-Cache: N-key=[8] '86843b0000000000'
	[  +2.193050] FS-Cache: Duplicate cookie detected
	[  +0.000790] FS-Cache: O-cookie c=0000017e [p=0000017d fl=226 nc=0 na=1]
	[  +0.001021] FS-Cache: O-cookie d=00000000865130fc{9p.inode} n=00000000edfd6497
	[  +0.001178] FS-Cache: O-key=[8] '85843b0000000000'
	[  +0.000743] FS-Cache: N-cookie c=00000189 [p=0000017d fl=2 nc=0 na=1]
	[  +0.001027] FS-Cache: N-cookie d=00000000865130fc{9p.inode} n=00000000b52af1ec
	[  +0.001079] FS-Cache: N-key=[8] '85843b0000000000'
	[  +0.264933] FS-Cache: Duplicate cookie detected
	[  +0.000720] FS-Cache: O-cookie c=00000183 [p=0000017d fl=226 nc=0 na=1]
	[  +0.000988] FS-Cache: O-cookie d=00000000865130fc{9p.inode} n=0000000017488ba3
	[  +0.001098] FS-Cache: O-key=[8] '8b843b0000000000'
	[  +0.000762] FS-Cache: N-cookie c=0000018a [p=0000017d fl=2 nc=0 na=1]
	[  +0.000940] FS-Cache: N-cookie d=00000000865130fc{9p.inode} n=0000000005ec9b19
	[  +0.001142] FS-Cache: N-key=[8] '8b843b0000000000'
	[May20 11:12] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [a0c29f0d3e719d5fdc0931e9109cfae19b3167c2b4e839f25c7b1b63784a02a4] <==
	2024-05-20 11:23:18.184809 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:23:28.184697 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:23:38.184560 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:23:48.184658 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:23:58.184803 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:24:08.184617 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:24:18.184659 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:24:28.185551 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:24:38.184763 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:24:48.184802 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:24:58.184541 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:25:08.184774 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:25:18.184710 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:25:28.184616 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:25:38.184710 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:25:48.184590 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:25:58.184567 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:26:08.184631 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:26:18.184595 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:26:28.184606 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:26:38.184603 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:26:48.184734 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:26:58.184663 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:27:08.184582 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:27:18.190216 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 11:27:18 up 1 day, 19:09,  0 users,  load average: 1.20, 1.71, 2.24
	Linux old-k8s-version-776336 5.15.0-1058-aws #64~20.04.1-Ubuntu SMP Tue Apr 9 11:11:55 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [b92863fb0bcbb6bb4f4fb5c32ac048568d8bea72be8749218a1c81ceb8dcbf67] <==
	I0520 11:25:10.643327       1 main.go:227] handling current node
	I0520 11:25:20.659020       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0520 11:25:20.659056       1 main.go:227] handling current node
	I0520 11:25:30.669286       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0520 11:25:30.669312       1 main.go:227] handling current node
	I0520 11:25:40.689754       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0520 11:25:40.689787       1 main.go:227] handling current node
	I0520 11:25:50.707885       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0520 11:25:50.707915       1 main.go:227] handling current node
	I0520 11:26:00.717972       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0520 11:26:00.718000       1 main.go:227] handling current node
	I0520 11:26:10.734252       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0520 11:26:10.734282       1 main.go:227] handling current node
	I0520 11:26:20.744192       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0520 11:26:20.744348       1 main.go:227] handling current node
	I0520 11:26:30.755108       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0520 11:26:30.755139       1 main.go:227] handling current node
	I0520 11:26:40.773506       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0520 11:26:40.773532       1 main.go:227] handling current node
	I0520 11:26:50.788259       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0520 11:26:50.788288       1 main.go:227] handling current node
	I0520 11:27:00.798430       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0520 11:27:00.798461       1 main.go:227] handling current node
	I0520 11:27:10.809329       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0520 11:27:10.809464       1 main.go:227] handling current node
	
	
	==> kube-apiserver [0b14a0f156605ce747bbd3ec92b34eb3ecf92d995ef08cfe0d147e8a56eb0eb3] <==
	I0520 11:23:49.919591       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0520 11:23:49.919600       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0520 11:24:29.100283       1 client.go:360] parsed scheme: "passthrough"
	I0520 11:24:29.100325       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0520 11:24:29.100334       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0520 11:24:29.718154       1 handler_proxy.go:102] no RequestInfo found in the context
	E0520 11:24:29.718234       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0520 11:24:29.718245       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0520 11:25:05.356335       1 client.go:360] parsed scheme: "passthrough"
	I0520 11:25:05.356378       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0520 11:25:05.356387       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0520 11:25:42.954340       1 client.go:360] parsed scheme: "passthrough"
	I0520 11:25:42.954389       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0520 11:25:42.954398       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0520 11:26:23.252746       1 client.go:360] parsed scheme: "passthrough"
	I0520 11:26:23.252795       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0520 11:26:23.252804       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0520 11:26:27.786076       1 handler_proxy.go:102] no RequestInfo found in the context
	E0520 11:26:27.786149       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0520 11:26:27.786158       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0520 11:26:58.788963       1 client.go:360] parsed scheme: "passthrough"
	I0520 11:26:58.788999       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0520 11:26:58.789007       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
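The recurring 503s for v1beta1.metrics.k8s.io mean the aggregated metrics APIService never became available, which is consistent with metrics-server stuck in ImagePullBackOff in the kubelet log further down. One plausible way to inspect the registration (assumes the same context; not part of the harness):

	# AVAILABLE is expected to read False while the backing pods never start
	kubectl --context old-k8s-version-776336 get apiservice v1beta1.metrics.k8s.io
	kubectl --context old-k8s-version-776336 get apiservice v1beta1.metrics.k8s.io \
	  -o jsonpath='{.status.conditions[?(@.type=="Available")].message}'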
	
	
	==> kube-controller-manager [7f0c6c3acf70df9c58574fae04af747df228563cb3da7252a8324c5242528d4e] <==
	W0520 11:22:51.630783       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0520 11:23:16.029968       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0520 11:23:23.281175       1 request.go:655] Throttling request took 1.048536809s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0520 11:23:24.132740       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0520 11:23:46.531812       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0520 11:23:55.783189       1 request.go:655] Throttling request took 1.048429065s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0520 11:23:56.634746       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0520 11:24:17.034007       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0520 11:24:28.284246       1 request.go:655] Throttling request took 1.04844712s, request: GET:https://192.168.76.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
	W0520 11:24:29.135726       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0520 11:24:47.535940       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0520 11:25:00.786143       1 request.go:655] Throttling request took 1.048254774s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0520 11:25:01.637945       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0520 11:25:18.038320       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0520 11:25:33.288393       1 request.go:655] Throttling request took 1.048562495s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0520 11:25:34.139828       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0520 11:25:48.540187       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0520 11:26:05.790250       1 request.go:655] Throttling request took 1.048401944s, request: GET:https://192.168.76.2:8443/apis/coordination.k8s.io/v1?timeout=32s
	W0520 11:26:06.641800       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0520 11:26:19.095119       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0520 11:26:38.292215       1 request.go:655] Throttling request took 1.048575177s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0520 11:26:39.143672       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0520 11:26:49.597125       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0520 11:27:10.794333       1 request.go:655] Throttling request took 1.048715156s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0520 11:27:11.654436       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
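The garbage-collector and resource-quota controllers fail discovery for the same unavailable metrics group, so these warnings are a symptom rather than an independent fault. A hedged probe of the aggregated endpoint itself:

	# A 503 "service unavailable" here matches the discovery failures above
	kubectl --context old-k8s-version-776336 get --raw "/apis/metrics.k8s.io/v1beta1"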
	
	
	==> kube-proxy [b026cb8ba53f0d1f48335254f05077b2a31ee7e5a35af6a7b530c5f18dccae5a] <==
	I0520 11:18:42.297980       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0520 11:18:42.298110       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0520 11:18:42.309389       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0520 11:18:42.309515       1 server_others.go:185] Using iptables Proxier.
	I0520 11:18:42.309842       1 server.go:650] Version: v1.20.0
	I0520 11:18:42.310557       1 config.go:315] Starting service config controller
	I0520 11:18:42.310588       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0520 11:18:42.310620       1 config.go:224] Starting endpoint slice config controller
	I0520 11:18:42.310631       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0520 11:18:42.410729       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0520 11:18:42.410729       1 shared_informer.go:247] Caches are synced for service config 
	I0520 11:21:29.661422       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0520 11:21:29.661753       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0520 11:21:29.672977       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0520 11:21:29.673147       1 server_others.go:185] Using iptables Proxier.
	I0520 11:21:29.673408       1 server.go:650] Version: v1.20.0
	I0520 11:21:29.674223       1 config.go:315] Starting service config controller
	I0520 11:21:29.674297       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0520 11:21:29.674396       1 config.go:224] Starting endpoint slice config controller
	I0520 11:21:29.674427       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0520 11:21:29.774487       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0520 11:21:29.774633       1 shared_informer.go:247] Caches are synced for service config 
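The duplicated startup sequence (11:18 and 11:21) reflects the stop/restart cycle between FirstStart and SecondStart; both times kube-proxy fell back to the iptables proxier. A sketch for confirming the proxier re-programmed Service rules after the restart (a hypothetical check, run against the node):

	# KUBE-SERVICES should list a rule for 10.96.0.1 (the VIP CoreDNS dials)
	minikube -p old-k8s-version-776336 ssh -- sudo iptables -t nat -L KUBE-SERVICES | head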
	
	
	==> kube-scheduler [200301c79d92cc1dca51aa9d3818041786dbc0fc25722073be3c056ab8ddc40d] <==
	E0520 11:18:20.653827       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 11:18:20.653974       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 11:18:20.654087       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 11:18:20.654235       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 11:18:20.654427       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 11:18:20.654985       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 11:18:20.655323       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 11:18:20.655439       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 11:18:20.657919       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 11:18:21.578939       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 11:18:21.581316       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 11:18:21.592150       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 11:18:21.599085       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 11:18:21.646082       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0520 11:18:22.126500       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0520 11:21:20.435133       1 serving.go:331] Generated self-signed cert in-memory
	W0520 11:21:26.673930       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0520 11:21:26.674095       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 11:21:26.674111       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0520 11:21:26.674117       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0520 11:21:26.951990       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0520 11:21:26.952143       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 11:21:26.952159       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 11:21:26.952174       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0520 11:21:27.155951       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	May 20 11:25:53 old-k8s-version-776336 kubelet[730]: E0520 11:25:53.311487     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 20 11:26:05 old-k8s-version-776336 kubelet[730]: I0520 11:26:05.310668     730 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9880d6ea4fbcfba04698cb8c4772564d24fc633aa79e6b35ccba51d3677dbb30
	May 20 11:26:05 old-k8s-version-776336 kubelet[730]: E0520 11:26:05.311044     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	May 20 11:26:07 old-k8s-version-776336 kubelet[730]: E0520 11:26:07.311454     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 20 11:26:16 old-k8s-version-776336 kubelet[730]: E0520 11:26:16.299580     730 container_manager_linux.go:533] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /docker/79b3e893485d03f6c7fa54c0f1076a4e9f6fac7238dffad483d3ab0503f5a33a, memory: /docker/79b3e893485d03f6c7fa54c0f1076a4e9f6fac7238dffad483d3ab0503f5a33a/system.slice/kubelet.service
	May 20 11:26:17 old-k8s-version-776336 kubelet[730]: I0520 11:26:17.311320     730 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9880d6ea4fbcfba04698cb8c4772564d24fc633aa79e6b35ccba51d3677dbb30
	May 20 11:26:17 old-k8s-version-776336 kubelet[730]: E0520 11:26:17.312403     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	May 20 11:26:22 old-k8s-version-776336 kubelet[730]: E0520 11:26:22.311955     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 20 11:26:29 old-k8s-version-776336 kubelet[730]: I0520 11:26:29.310672     730 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9880d6ea4fbcfba04698cb8c4772564d24fc633aa79e6b35ccba51d3677dbb30
	May 20 11:26:29 old-k8s-version-776336 kubelet[730]: E0520 11:26:29.310991     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	May 20 11:26:35 old-k8s-version-776336 kubelet[730]: E0520 11:26:35.311614     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 20 11:26:40 old-k8s-version-776336 kubelet[730]: I0520 11:26:40.310649     730 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9880d6ea4fbcfba04698cb8c4772564d24fc633aa79e6b35ccba51d3677dbb30
	May 20 11:26:40 old-k8s-version-776336 kubelet[730]: E0520 11:26:40.310989     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	May 20 11:26:46 old-k8s-version-776336 kubelet[730]: E0520 11:26:46.315105     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 20 11:26:53 old-k8s-version-776336 kubelet[730]: I0520 11:26:53.310649     730 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9880d6ea4fbcfba04698cb8c4772564d24fc633aa79e6b35ccba51d3677dbb30
	May 20 11:26:53 old-k8s-version-776336 kubelet[730]: E0520 11:26:53.310994     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	May 20 11:26:57 old-k8s-version-776336 kubelet[730]: E0520 11:26:57.312314     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 20 11:27:06 old-k8s-version-776336 kubelet[730]: I0520 11:27:06.311130     730 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9880d6ea4fbcfba04698cb8c4772564d24fc633aa79e6b35ccba51d3677dbb30
	May 20 11:27:06 old-k8s-version-776336 kubelet[730]: E0520 11:27:06.311519     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
	May 20 11:27:08 old-k8s-version-776336 kubelet[730]: E0520 11:27:08.334331     730 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	May 20 11:27:08 old-k8s-version-776336 kubelet[730]: E0520 11:27:08.334385     730 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	May 20 11:27:08 old-k8s-version-776336 kubelet[730]: E0520 11:27:08.334514     730 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-lstwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf): ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	May 20 11:27:08 old-k8s-version-776336 kubelet[730]: E0520 11:27:08.334543     730 pod_workers.go:191] Error syncing pod eb2910c3-cbac-462e-901e-4fb79caf95bf ("metrics-server-9975d5f86-qdks4_kube-system(eb2910c3-cbac-462e-901e-4fb79caf95bf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	May 20 11:27:17 old-k8s-version-776336 kubelet[730]: I0520 11:27:17.310600     730 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9880d6ea4fbcfba04698cb8c4772564d24fc633aa79e6b35ccba51d3677dbb30
	May 20 11:27:17 old-k8s-version-776336 kubelet[730]: E0520 11:27:17.310927     730 pod_workers.go:191] Error syncing pod 2c684adc-682c-4c95-a80c-90b35ac50afd ("dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tzcxg_kubernetes-dashboard(2c684adc-682c-4c95-a80c-90b35ac50afd)"
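The pull failures all point at fake.domain/registry.k8s.io/echoserver:1.4, an unresolvable registry that the test harness appears to substitute deliberately so metrics-server cannot start. A minimal sketch for reading back the image reference kubelet keeps retrying (assumes the Deployment is named metrics-server, as the pod name suggests):

	# Should print the fake.domain image seen in the ErrImagePull messages
	kubectl --context old-k8s-version-776336 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'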
	
	
	==> kubernetes-dashboard [478b985a70c80d44c1dc3a606124f6d8d1a1b16caa15815f4d319199182b0973] <==
	2024/05/20 11:21:49 Using namespace: kubernetes-dashboard
	2024/05/20 11:21:49 Using in-cluster config to connect to apiserver
	2024/05/20 11:21:49 Using secret token for csrf signing
	2024/05/20 11:21:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/05/20 11:21:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/05/20 11:21:49 Successful initial request to the apiserver, version: v1.20.0
	2024/05/20 11:21:49 Generating JWE encryption key
	2024/05/20 11:21:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/05/20 11:21:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/05/20 11:21:51 Initializing JWE encryption key from synchronized object
	2024/05/20 11:21:51 Creating in-cluster Sidecar client
	2024/05/20 11:21:52 Serving insecurely on HTTP port: 9090
	2024/05/20 11:21:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/20 11:22:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/20 11:22:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/20 11:23:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/20 11:23:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/20 11:24:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/20 11:24:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/20 11:25:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/20 11:25:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/20 11:26:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/20 11:26:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/20 11:21:49 Starting overwatch
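The metric client retries every 30 seconds because its backend, dashboard-metrics-scraper, is the pod shown in CrashLoopBackOff in the kubelet log. One way to correlate (hypothetical, same context):

	# An empty ENDPOINTS column would match the repeated health-check failures
	kubectl --context old-k8s-version-776336 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper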
	
	
	==> storage-provisioner [80aac8daed1fe77959700c7a1f1a34853c09f3dbeb46648d0ab015345f91e607] <==
	I0520 11:19:27.221501       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 11:19:27.248233       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 11:19:27.248373       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 11:19:27.279001       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 11:19:27.279233       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-776336_8c64fc10-b564-4274-87de-2aab05e36e61!
	I0520 11:19:27.281383       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"445e794e-cf50-4b51-9c2b-3eb437caaac3", APIVersion:"v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-776336_8c64fc10-b564-4274-87de-2aab05e36e61 became leader
	I0520 11:19:27.380359       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-776336_8c64fc10-b564-4274-87de-2aab05e36e61!
	I0520 11:21:30.756969       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 11:21:30.772996       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 11:21:30.773128       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 11:21:48.250771       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 11:21:48.251717       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-776336_e4d92e54-c78c-4ba9-8674-7ab6051a02e7!
	I0520 11:21:48.251322       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"445e794e-cf50-4b51-9c2b-3eb437caaac3", APIVersion:"v1", ResourceVersion:"769", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-776336_e4d92e54-c78c-4ba9-8674-7ab6051a02e7 became leader
	I0520 11:21:48.357048       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-776336_e4d92e54-c78c-4ba9-8674-7ab6051a02e7!
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-776336 -n old-k8s-version-776336
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-776336 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-qdks4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-776336 describe pod metrics-server-9975d5f86-qdks4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-776336 describe pod metrics-server-9975d5f86-qdks4: exit status 1 (105.644483ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-qdks4" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-776336 describe pod metrics-server-9975d5f86-qdks4: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (380.68s)
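For local reproduction, a sketch of re-running only this subtest (this assumes minikube's test/integration layout and that any flags the harness needs, such as start arguments, are supplied separately):

	go test ./test/integration -run "TestStartStop/group/old-k8s-version/serial/SecondStart" -v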

Test pass (295/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.94
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.19
12 TestDownloadOnly/v1.30.1/json-events 7.12
13 TestDownloadOnly/v1.30.1/preload-exists 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.07
18 TestDownloadOnly/v1.30.1/DeleteAll 0.2
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 232.18
29 TestAddons/parallel/Registry 15.83
31 TestAddons/parallel/InspektorGadget 11.8
35 TestAddons/parallel/CSI 55.35
36 TestAddons/parallel/Headlamp 9.96
37 TestAddons/parallel/CloudSpanner 5.57
38 TestAddons/parallel/LocalPath 51.46
39 TestAddons/parallel/NvidiaDevicePlugin 6.59
40 TestAddons/parallel/Yakd 5
43 TestAddons/serial/GCPAuth/Namespaces 0.16
44 TestAddons/StoppedEnableDisable 12.19
45 TestCertOptions 38.03
46 TestCertExpiration 241.41
48 TestForceSystemdFlag 40.85
49 TestForceSystemdEnv 44.29
55 TestErrorSpam/setup 28.58
56 TestErrorSpam/start 0.73
57 TestErrorSpam/status 0.96
58 TestErrorSpam/pause 1.64
59 TestErrorSpam/unpause 1.77
60 TestErrorSpam/stop 1.41
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 76.32
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 28.74
67 TestFunctional/serial/KubeContext 0.07
68 TestFunctional/serial/KubectlGetPods 0.11
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.77
72 TestFunctional/serial/CacheCmd/cache/add_local 1.07
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.96
77 TestFunctional/serial/CacheCmd/cache/delete 0.12
78 TestFunctional/serial/MinikubeKubectlCmd 0.12
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
80 TestFunctional/serial/ExtraConfig 41.57
81 TestFunctional/serial/ComponentHealth 0.1
82 TestFunctional/serial/LogsCmd 1.69
83 TestFunctional/serial/LogsFileCmd 1.76
84 TestFunctional/serial/InvalidService 4.26
86 TestFunctional/parallel/ConfigCmd 0.44
87 TestFunctional/parallel/DashboardCmd 12.78
88 TestFunctional/parallel/DryRun 0.44
89 TestFunctional/parallel/InternationalLanguage 0.2
90 TestFunctional/parallel/StatusCmd 1.24
94 TestFunctional/parallel/ServiceCmdConnect 12.73
95 TestFunctional/parallel/AddonsCmd 0.28
96 TestFunctional/parallel/PersistentVolumeClaim 28.08
98 TestFunctional/parallel/SSHCmd 0.69
99 TestFunctional/parallel/CpCmd 2.22
101 TestFunctional/parallel/FileSync 0.33
102 TestFunctional/parallel/CertSync 2.05
106 TestFunctional/parallel/NodeLabels 0.12
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.74
110 TestFunctional/parallel/License 0.27
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.71
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.38
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.14
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 6.23
123 TestFunctional/parallel/ServiceCmd/List 0.95
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.63
126 TestFunctional/parallel/ProfileCmd/profile_list 0.53
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.53
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
129 TestFunctional/parallel/ServiceCmd/Format 0.61
130 TestFunctional/parallel/MountCmd/any-port 7.26
131 TestFunctional/parallel/ServiceCmd/URL 0.41
132 TestFunctional/parallel/MountCmd/specific-port 2.42
133 TestFunctional/parallel/MountCmd/VerifyCleanup 2.54
134 TestFunctional/parallel/Version/short 0.07
135 TestFunctional/parallel/Version/components 1.22
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
140 TestFunctional/parallel/ImageCommands/ImageBuild 2.64
141 TestFunctional/parallel/ImageCommands/Setup 2.28
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.66
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.9
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.16
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.87
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.24
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.95
152 TestFunctional/delete_addon-resizer_images 0.08
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 161
159 TestMultiControlPlane/serial/DeployApp 6.29
160 TestMultiControlPlane/serial/PingHostFromPods 1.63
161 TestMultiControlPlane/serial/AddWorkerNode 24.96
162 TestMultiControlPlane/serial/NodeLabels 0.11
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.74
164 TestMultiControlPlane/serial/CopyFile 18.73
165 TestMultiControlPlane/serial/StopSecondaryNode 12.71
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.52
167 TestMultiControlPlane/serial/RestartSecondaryNode 22.16
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 7.17
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 201.36
170 TestMultiControlPlane/serial/DeleteSecondaryNode 12.88
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.52
172 TestMultiControlPlane/serial/StopCluster 35.85
173 TestMultiControlPlane/serial/RestartCluster 81.35
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.54
175 TestMultiControlPlane/serial/AddSecondaryNode 60.02
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.77
180 TestJSONOutput/start/Command 74.93
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.75
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.68
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.91
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.21
205 TestKicCustomNetwork/create_custom_network 42.77
206 TestKicCustomNetwork/use_default_bridge_network 33.96
207 TestKicExistingNetwork 36.68
208 TestKicCustomSubnet 34.34
209 TestKicStaticIP 36.17
210 TestMainNoArgs 0.06
211 TestMinikubeProfile 69.3
214 TestMountStart/serial/StartWithMountFirst 9.41
215 TestMountStart/serial/VerifyMountFirst 0.26
216 TestMountStart/serial/StartWithMountSecond 6.42
217 TestMountStart/serial/VerifyMountSecond 0.26
218 TestMountStart/serial/DeleteFirst 1.59
219 TestMountStart/serial/VerifyMountPostDelete 0.24
220 TestMountStart/serial/Stop 1.2
221 TestMountStart/serial/RestartStopped 8.23
222 TestMountStart/serial/VerifyMountPostStop 0.26
225 TestMultiNode/serial/FreshStart2Nodes 119.92
226 TestMultiNode/serial/DeployApp2Nodes 4.82
227 TestMultiNode/serial/PingHostFrom2Pods 0.99
228 TestMultiNode/serial/AddNode 46.75
229 TestMultiNode/serial/MultiNodeLabels 0.1
230 TestMultiNode/serial/ProfileList 0.38
231 TestMultiNode/serial/CopyFile 9.77
232 TestMultiNode/serial/StopNode 2.21
233 TestMultiNode/serial/StartAfterStop 10.08
234 TestMultiNode/serial/RestartKeepsNodes 83.5
235 TestMultiNode/serial/DeleteNode 5.31
236 TestMultiNode/serial/StopMultiNode 23.91
237 TestMultiNode/serial/RestartMultiNode 59.18
238 TestMultiNode/serial/ValidateNameConflict 32.37
243 TestPreload 112.19
245 TestScheduledStopUnix 106.71
248 TestInsufficientStorage 10.44
249 TestRunningBinaryUpgrade 76.11
251 TestKubernetesUpgrade 397.62
252 TestMissingContainerUpgrade 143
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
255 TestNoKubernetes/serial/StartWithK8s 39.24
256 TestNoKubernetes/serial/StartWithStopK8s 7.46
257 TestNoKubernetes/serial/Start 9.37
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
259 TestNoKubernetes/serial/ProfileList 0.88
260 TestNoKubernetes/serial/Stop 1.26
261 TestNoKubernetes/serial/StartNoArgs 7.84
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
263 TestStoppedBinaryUpgrade/Setup 1.06
264 TestStoppedBinaryUpgrade/Upgrade 72.96
265 TestStoppedBinaryUpgrade/MinikubeLogs 2.42
274 TestPause/serial/Start 77.24
275 TestPause/serial/SecondStartNoReconfiguration 36.06
276 TestPause/serial/Pause 0.9
277 TestPause/serial/VerifyStatus 0.44
278 TestPause/serial/Unpause 1.22
279 TestPause/serial/PauseAgain 1.55
280 TestPause/serial/DeletePaused 3.12
281 TestPause/serial/VerifyDeletedResources 2.72
289 TestNetworkPlugins/group/false 4.79
294 TestStartStop/group/old-k8s-version/serial/FirstStart 184.35
296 TestStartStop/group/no-preload/serial/FirstStart 71.37
297 TestStartStop/group/old-k8s-version/serial/DeployApp 9.85
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.08
299 TestStartStop/group/old-k8s-version/serial/Stop 12.34
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.28
302 TestStartStop/group/no-preload/serial/DeployApp 8.35
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
304 TestStartStop/group/no-preload/serial/Stop 12.27
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
306 TestStartStop/group/no-preload/serial/SecondStart 289.19
307 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
309 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
310 TestStartStop/group/no-preload/serial/Pause 3.23
312 TestStartStop/group/embed-certs/serial/FirstStart 78.31
313 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
315 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
316 TestStartStop/group/old-k8s-version/serial/Pause 3
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.96
319 TestStartStop/group/embed-certs/serial/DeployApp 10.32
320 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.09
321 TestStartStop/group/embed-certs/serial/Stop 12.01
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
323 TestStartStop/group/embed-certs/serial/SecondStart 266.64
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.36
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.16
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.76
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
328 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 303.62
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.14
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
332 TestStartStop/group/embed-certs/serial/Pause 3.1
334 TestStartStop/group/newest-cni/serial/FirstStart 43.8
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.96
337 TestStartStop/group/newest-cni/serial/Stop 1.26
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
339 TestStartStop/group/newest-cni/serial/SecondStart 17.94
340 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
344 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
345 TestStartStop/group/newest-cni/serial/Pause 2.81
346 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
347 TestNetworkPlugins/group/auto/Start 59.49
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.96
349 TestNetworkPlugins/group/kindnet/Start 82.67
350 TestNetworkPlugins/group/auto/KubeletFlags 0.28
351 TestNetworkPlugins/group/auto/NetCatPod 10.29
352 TestNetworkPlugins/group/auto/DNS 0.2
353 TestNetworkPlugins/group/auto/Localhost 0.16
354 TestNetworkPlugins/group/auto/HairPin 0.16
355 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
356 TestNetworkPlugins/group/calico/Start 76.83
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
358 TestNetworkPlugins/group/kindnet/NetCatPod 10.65
359 TestNetworkPlugins/group/kindnet/DNS 0.28
360 TestNetworkPlugins/group/kindnet/Localhost 0.28
361 TestNetworkPlugins/group/kindnet/HairPin 0.18
362 TestNetworkPlugins/group/custom-flannel/Start 59.66
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.31
365 TestNetworkPlugins/group/calico/NetCatPod 11.28
366 TestNetworkPlugins/group/calico/DNS 0.31
367 TestNetworkPlugins/group/calico/Localhost 0.2
368 TestNetworkPlugins/group/calico/HairPin 0.2
369 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
370 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.39
371 TestNetworkPlugins/group/custom-flannel/DNS 0.24
372 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
373 TestNetworkPlugins/group/custom-flannel/HairPin 0.25
374 TestNetworkPlugins/group/enable-default-cni/Start 91.95
375 TestNetworkPlugins/group/flannel/Start 70.54
376 TestNetworkPlugins/group/flannel/ControllerPod 6.01
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.28
379 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
380 TestNetworkPlugins/group/flannel/NetCatPod 9.25
381 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
382 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
383 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
384 TestNetworkPlugins/group/flannel/DNS 0.17
385 TestNetworkPlugins/group/flannel/Localhost 0.15
386 TestNetworkPlugins/group/flannel/HairPin 0.17
387 TestNetworkPlugins/group/bridge/Start 87.6
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
389 TestNetworkPlugins/group/bridge/NetCatPod 11.26
390 TestNetworkPlugins/group/bridge/DNS 0.17
391 TestNetworkPlugins/group/bridge/Localhost 0.15
392 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (7.94s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-801226 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-801226 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.934971646s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.94s)
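
The -o=json flag in the command above switches minikube to machine-readable output: one JSON event per line on stdout. The TestJSONOutput subtests in the duration table (DistinctCurrentSteps, IncreasingCurrentSteps) assert that the step counters in that stream never repeat and never go backwards. Below is a minimal sketch of such a check, assuming each event is a JSON object whose "data" map carries a numeric string under "currentstep"; the field names come from minikube's documented JSON output and are an assumption here, not something this log shows.

	// jsoncheck.go: sketch of a strictly-increasing step check over
	// `minikube start -o=json` output piped to stdin. Not the real helper.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strconv"
	)

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some events are long
		last := -1
		for sc.Scan() {
			var ev struct {
				Data map[string]string `json:"data"` // assumes string-valued data
			}
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				fmt.Fprintf(os.Stderr, "non-JSON line: %q\n", sc.Text())
				os.Exit(1)
			}
			s, ok := ev.Data["currentstep"]
			if !ok {
				continue // not every event carries a step counter
			}
			n, err := strconv.Atoi(s)
			if err != nil || n <= last {
				fmt.Fprintf(os.Stderr, "step %q after %d breaks monotonicity\n", s, last)
				os.Exit(1)
			}
			last = n
		}
	}

Piping `out/minikube-linux-arm64 start -o=json ... | go run jsoncheck.go` fails on any duplicated or out-of-order step, which is the combined property the two parallel subtests verify.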

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
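
preload-exists only needs to confirm that the tarball fetched by the json-events subtest landed on disk. A sketch of that existence check as a Go test, using the cache path printed in the download log below; the real suite resolves the directory from MINIKUBE_HOME (here /home/jenkins/minikube-integration/18925-1463640/.minikube), so the home-relative path is an assumption.

	package preload_test

	import (
		"os"
		"path/filepath"
		"testing"
	)

	// TestPreloadExists is illustrative, not the suite's actual helper: it
	// asserts the v1.20.0 cri-o preload tarball exists in the local cache.
	func TestPreloadExists(t *testing.T) {
		home, err := os.UserHomeDir()
		if err != nil {
			t.Fatal(err)
		}
		p := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
			"preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4")
		if _, err := os.Stat(p); err != nil {
			t.Fatalf("preload tarball missing: %v", err)
		}
	}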

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-801226
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-801226: exit status 85 (71.60285ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-801226 | jenkins | v1.33.1 | 20 May 24 10:24 UTC |          |
	|         | -p download-only-801226        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 10:24:51
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 10:24:51.740309 1469083 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:24:51.740467 1469083 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:24:51.740476 1469083 out.go:304] Setting ErrFile to fd 2...
	I0520 10:24:51.740482 1469083 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:24:51.740726 1469083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-1463640/.minikube/bin
	W0520 10:24:51.740880 1469083 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18925-1463640/.minikube/config/config.json: open /home/jenkins/minikube-integration/18925-1463640/.minikube/config/config.json: no such file or directory
	I0520 10:24:51.741276 1469083 out.go:298] Setting JSON to true
	I0520 10:24:51.742177 1469083 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":151639,"bootTime":1716049053,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0520 10:24:51.742249 1469083 start.go:139] virtualization:  
	I0520 10:24:51.745244 1469083 out.go:97] [download-only-801226] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0520 10:24:51.747270 1469083 out.go:169] MINIKUBE_LOCATION=18925
	W0520 10:24:51.745434 1469083 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18925-1463640/.minikube/cache/preloaded-tarball: no such file or directory
	I0520 10:24:51.745502 1469083 notify.go:220] Checking for updates...
	I0520 10:24:51.751004 1469083 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 10:24:51.752876 1469083 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18925-1463640/kubeconfig
	I0520 10:24:51.754954 1469083 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-1463640/.minikube
	I0520 10:24:51.756735 1469083 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0520 10:24:51.760025 1469083 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0520 10:24:51.760329 1469083 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 10:24:51.782178 1469083 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0520 10:24:51.782295 1469083 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 10:24:51.846191 1469083 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-05-20 10:24:51.836955759 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 10:24:51.846299 1469083 docker.go:295] overlay module found
	I0520 10:24:51.848102 1469083 out.go:97] Using the docker driver based on user configuration
	I0520 10:24:51.848136 1469083 start.go:297] selected driver: docker
	I0520 10:24:51.848143 1469083 start.go:901] validating driver "docker" against <nil>
	I0520 10:24:51.848256 1469083 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 10:24:51.913537 1469083 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-05-20 10:24:51.903357954 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 10:24:51.913753 1469083 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 10:24:51.914030 1469083 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0520 10:24:51.914204 1469083 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 10:24:51.916210 1469083 out.go:169] Using Docker driver with root privileges
	I0520 10:24:51.917840 1469083 cni.go:84] Creating CNI manager for ""
	I0520 10:24:51.917860 1469083 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0520 10:24:51.917874 1469083 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 10:24:51.917967 1469083 start.go:340] cluster config:
	{Name:download-only-801226 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-801226 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:24:51.919673 1469083 out.go:97] Starting "download-only-801226" primary control-plane node in "download-only-801226" cluster
	I0520 10:24:51.919694 1469083 cache.go:121] Beginning downloading kic base image for docker with crio
	I0520 10:24:51.921452 1469083 out.go:97] Pulling base image v0.0.44-1715707529-18887 ...
	I0520 10:24:51.921476 1469083 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 10:24:51.921640 1469083 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0520 10:24:51.935770 1469083 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a to local cache
	I0520 10:24:51.935957 1469083 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local cache directory
	I0520 10:24:51.936059 1469083 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a to local cache
	I0520 10:24:51.997354 1469083 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0520 10:24:51.997379 1469083 cache.go:56] Caching tarball of preloaded images
	I0520 10:24:51.998317 1469083 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 10:24:52.006505 1469083 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0520 10:24:52.006561 1469083 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0520 10:24:52.109035 1469083 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/18925-1463640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-801226 host does not exist
	  To start a cluster, run: "minikube start -p download-only-801226"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
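
Note the subtest passes even though `minikube logs` exits non-zero: a download-only profile was never started, so exit status 85 together with the "host does not exist" hint is the expected outcome, and the test asserts exactly that. A sketch of asserting a specific exit code with os/exec; the value 85 is copied from the run above, not taken from any minikube API.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("out/minikube-linux-arm64",
			"logs", "-p", "download-only-801226").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 85 {
			fmt.Println("got the expected exit status 85")
			return
		}
		fmt.Printf("unexpected result: %v\n", err)
	}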

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.19s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-801226
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.19s)

TestDownloadOnly/v1.30.1/json-events (7.12s)

=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-692242 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-692242 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.115076323s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (7.12s)

TestDownloadOnly/v1.30.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

TestDownloadOnly/v1.30.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-692242
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-692242: exit status 85 (68.451897ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-801226 | jenkins | v1.33.1 | 20 May 24 10:24 UTC |                     |
	|         | -p download-only-801226        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 20 May 24 10:24 UTC | 20 May 24 10:24 UTC |
	| delete  | -p download-only-801226        | download-only-801226 | jenkins | v1.33.1 | 20 May 24 10:24 UTC | 20 May 24 10:25 UTC |
	| start   | -o=json --download-only        | download-only-692242 | jenkins | v1.33.1 | 20 May 24 10:25 UTC |                     |
	|         | -p download-only-692242        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 10:25:00
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 10:25:00.266163 1469249 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:25:00.266321 1469249 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:25:00.266328 1469249 out.go:304] Setting ErrFile to fd 2...
	I0520 10:25:00.266334 1469249 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:25:00.266695 1469249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-1463640/.minikube/bin
	I0520 10:25:00.267247 1469249 out.go:298] Setting JSON to true
	I0520 10:25:00.281792 1469249 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":151648,"bootTime":1716049053,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0520 10:25:00.281889 1469249 start.go:139] virtualization:  
	I0520 10:25:00.286098 1469249 out.go:97] [download-only-692242] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0520 10:25:00.286629 1469249 notify.go:220] Checking for updates...
	I0520 10:25:00.297397 1469249 out.go:169] MINIKUBE_LOCATION=18925
	I0520 10:25:00.299516 1469249 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 10:25:00.301509 1469249 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18925-1463640/kubeconfig
	I0520 10:25:00.304182 1469249 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-1463640/.minikube
	I0520 10:25:00.307963 1469249 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0520 10:25:00.311647 1469249 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0520 10:25:00.312496 1469249 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 10:25:00.348353 1469249 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0520 10:25:00.348613 1469249 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 10:25:00.451698 1469249 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2024-05-20 10:25:00.440295274 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 10:25:00.451817 1469249 docker.go:295] overlay module found
	I0520 10:25:00.453807 1469249 out.go:97] Using the docker driver based on user configuration
	I0520 10:25:00.453852 1469249 start.go:297] selected driver: docker
	I0520 10:25:00.453861 1469249 start.go:901] validating driver "docker" against <nil>
	I0520 10:25:00.453993 1469249 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 10:25:00.514386 1469249 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2024-05-20 10:25:00.503813865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 10:25:00.514573 1469249 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 10:25:00.514939 1469249 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0520 10:25:00.515139 1469249 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 10:25:00.524272 1469249 out.go:169] Using Docker driver with root privileges
	I0520 10:25:00.525881 1469249 cni.go:84] Creating CNI manager for ""
	I0520 10:25:00.525913 1469249 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0520 10:25:00.525933 1469249 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 10:25:00.526069 1469249 start.go:340] cluster config:
	{Name:download-only-692242 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-692242 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:25:00.527833 1469249 out.go:97] Starting "download-only-692242" primary control-plane node in "download-only-692242" cluster
	I0520 10:25:00.527876 1469249 cache.go:121] Beginning downloading kic base image for docker with crio
	I0520 10:25:00.529636 1469249 out.go:97] Pulling base image v0.0.44-1715707529-18887 ...
	I0520 10:25:00.529766 1469249 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:25:00.529841 1469249 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0520 10:25:00.546000 1469249 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a to local cache
	I0520 10:25:00.546181 1469249 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local cache directory
	I0520 10:25:00.546204 1469249 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local cache directory, skipping pull
	I0520 10:25:00.546209 1469249 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in cache, skipping pull
	I0520 10:25:00.546217 1469249 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a as a tarball
	I0520 10:25:00.599064 1469249 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4
	I0520 10:25:00.599099 1469249 cache.go:56] Caching tarball of preloaded images
	I0520 10:25:00.599294 1469249 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:25:00.601144 1469249 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0520 10:25:00.601187 1469249 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4 ...
	I0520 10:25:00.710569 1469249 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:a3311b98134f2386d0a6251840019f9e -> /home/jenkins/minikube-integration/18925-1463640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-692242 host does not exist
	  To start a cluster, run: "minikube start -p download-only-692242"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.07s)

TestDownloadOnly/v1.30.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (0.20s)

TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-692242
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-390288 --alsologtostderr --binary-mirror http://127.0.0.1:33461 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-390288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-390288
--- PASS: TestBinaryMirror (0.56s)
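
Here --binary-mirror redirects the kubectl/kubelet/kubeadm downloads to a throwaway HTTP endpoint on 127.0.0.1:33461. A sketch of what such a mirror can look like, assuming the needed release binaries were pre-staged under a local ./mirror directory at the same request paths minikube uses (the directory name is illustrative):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve a pre-populated directory tree as a drop-in download mirror.
		// There is no upstream fallback: anything minikube requests must
		// already exist under ./mirror at the requested path.
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:33461", nil))
	}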

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-091599
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-091599: exit status 85 (71.582805ms)

-- stdout --
	* Profile "addons-091599" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-091599"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-091599
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-091599: exit status 85 (77.398766ms)

-- stdout --
	* Profile "addons-091599" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-091599"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (232.18s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-091599 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-091599 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m52.177111768s)
--- PASS: TestAddons/Setup (232.18s)

TestAddons/parallel/Registry (15.83s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 63.698417ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-c9mld" [2c38d8b7-c7e2-4b49-a2c6-ce2a95367d53] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004263707s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2mv7g" [4c0da18b-a7b2-46aa-9e52-c5273f77fb67] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004768444s
addons_test.go:340: (dbg) Run:  kubectl --context addons-091599 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-091599 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-091599 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.848897196s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-091599 ip
2024/05/20 10:29:16 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-091599 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.83s)
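
The registry addon is probed twice above: in-cluster via wget against registry.kube-system.svc.cluster.local, and from the host via the node IP on port 5000. A sketch of the host-side probe follows; targeting /v2/ (the Docker Registry HTTP API root) is an assumption about the registry image, since the log only shows a GET on the bare port.

	package main

	import (
		"fmt"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		// 192.168.49.2 is the node IP reported by `minikube ip` in this run.
		resp, err := client.Get("http://192.168.49.2:5000/v2/")
		if err != nil {
			log.Fatalf("registry unreachable: %v", err)
		}
		defer resp.Body.Close()
		fmt.Println("registry answered:", resp.Status)
	}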

TestAddons/parallel/InspektorGadget (11.8s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dz5x4" [a00a381f-98f9-4091-bd45-1210086fa8f7] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004878611s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-091599
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-091599: (5.79450236s)
--- PASS: TestAddons/parallel/InspektorGadget (11.80s)

TestAddons/parallel/CSI (55.35s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 7.307078ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-091599 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-091599 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2765e9e2-4699-45fb-bc05-65ac949f6168] Pending
helpers_test.go:344: "task-pv-pod" [2765e9e2-4699-45fb-bc05-65ac949f6168] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2765e9e2-4699-45fb-bc05-65ac949f6168] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003600076s
addons_test.go:584: (dbg) Run:  kubectl --context addons-091599 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-091599 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-091599 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-091599 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-091599 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-091599 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d6fda90b-bdd4-47c6-aad9-1beeca6c8882] Pending
helpers_test.go:344: "task-pv-pod-restore" [d6fda90b-bdd4-47c6-aad9-1beeca6c8882] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d6fda90b-bdd4-47c6-aad9-1beeca6c8882] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003436023s
addons_test.go:626: (dbg) Run:  kubectl --context addons-091599 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-091599 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-091599 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-091599 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-091599 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.723084448s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-091599 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (55.35s)
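The steps above exercise the full snapshot/restore path of the csi-hostpath driver. A minimal hand-run sketch of the same flow follows; the class names csi-hostpath-snapclass and csi-hostpath-sc are assumptions, since the testdata manifests themselves are not reproduced in this report:

# Snapshot the bound PVC "hpvc", wait for readiness, then restore into a new PVC.
kubectl --context addons-091599 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
  source:
    persistentVolumeClaimName: hpvc
EOF
kubectl --context addons-091599 wait volumesnapshot/new-snapshot-demo \
  --for=jsonpath='{.status.readyToUse}'=true --timeout=6m
kubectl --context addons-091599 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc                 # assumed class name
  dataSource:
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
EOF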

                                                
                                    
x
+
TestAddons/parallel/Headlamp (9.96s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-091599 --alsologtostderr -v=1
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-68456f997b-kzpsq" [189e3662-a2ca-4f44-9c07-6e5bf879bde3] Pending
helpers_test.go:344: "headlamp-68456f997b-kzpsq" [189e3662-a2ca-4f44-9c07-6e5bf879bde3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-68456f997b-kzpsq" [189e3662-a2ca-4f44-9c07-6e5bf879bde3] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-68456f997b-kzpsq" [189e3662-a2ca-4f44-9c07-6e5bf879bde3] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.005105413s
--- PASS: TestAddons/parallel/Headlamp (9.96s)
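The same check can be run by hand; both steps mirror the log above, with the pod wait expressed as a kubectl one-liner (minikube abbreviates the out/minikube-linux-arm64 binary under test):

minikube addons enable headlamp -p addons-091599
kubectl --context addons-091599 -n headlamp wait pod \
  -l app.kubernetes.io/name=headlamp --for=condition=Ready --timeout=2m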

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.57s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-7x4d7" [bf84e67f-ad10-459a-85da-658f7a7ad740] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004098272s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-091599
--- PASS: TestAddons/parallel/CloudSpanner (5.57s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (51.46s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-091599 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-091599 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091599 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c7fcfce8-e28e-445b-902b-7d2f087e1c2b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c7fcfce8-e28e-445b-902b-7d2f087e1c2b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c7fcfce8-e28e-445b-902b-7d2f087e1c2b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.00607338s
addons_test.go:891: (dbg) Run:  kubectl --context addons-091599 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-091599 ssh "cat /opt/local-path-provisioner/pvc-2b457869-27d5-410a-999e-eb21b51d4e81_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-091599 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-091599 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-091599 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-091599 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.287299829s)
--- PASS: TestAddons/parallel/LocalPath (51.46s)
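This test drives rancher's local-path provisioner end to end: the PVC stays Pending through the first few polls because local-path binds on first consumer, then the pod writes a file that the ssh step reads back from the node. A minimal sketch of the claim, assuming the addon's usual local-path storage class (testdata/storage-provisioner-rancher/pvc.yaml is not reproduced here):

kubectl --context addons-091599 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path      # assumed; the standard class for this provisioner
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 64Mi
EOF
# Once a pod has written to the volume, the backing directory is visible on the node:
minikube -p addons-091599 ssh \
  "cat /opt/local-path-provisioner/<pv-name>_default_test-pvc/file1"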

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.59s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-xt86b" [e96a5492-ba66-4969-aaa2-03c1ea00e071] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005216964s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-091599
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.59s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (5.00s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-zk8ph" [b80d9411-a36d-43ca-b43d-5be9aa33fad1] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003623934s
--- PASS: TestAddons/parallel/Yakd (5.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-091599 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-091599 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)
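What this asserts: with the gcp-auth addon enabled, its credentials secret is replicated into namespaces created afterwards, so workloads there pick up the same credentials without any manual copy:

kubectl --context addons-091599 create ns new-namespace
kubectl --context addons-091599 get secret gcp-auth -n new-namespace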

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.19s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-091599
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-091599: (11.918032054s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-091599
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-091599
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-091599
--- PASS: TestAddons/StoppedEnableDisable (12.19s)
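The point here is that addon state can be toggled while the cluster is stopped; enable/disable mutate the profile's config, which the next start applies:

minikube stop -p addons-091599
minikube addons enable dashboard -p addons-091599    # works against a stopped cluster
minikube addons disable dashboard -p addons-091599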

                                                
                                    
x
+
TestCertOptions (38.03s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-069594 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-069594 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (35.370523353s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-069594 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-069594 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-069594 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-069594" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-069594
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-069594: (1.994704265s)
--- PASS: TestCertOptions (38.03s)
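The flags above bake extra IPs and hostnames into the apiserver certificate and move the apiserver port. A quick hand check of the resulting SANs (the grep is an addition; the rest mirrors the test):

minikube start -p cert-options-069594 --apiserver-ips=192.168.15.15 \
  --apiserver-names=www.google.com --apiserver-port=8555 \
  --driver=docker --container-runtime=crio
minikube -p cert-options-069594 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 'Subject Alternative Name'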

                                                
                                    
x
+
TestCertExpiration (241.41s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-052084 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-052084 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (40.436711491s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-052084 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-052084 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (18.63783767s)
helpers_test.go:175: Cleaning up "cert-expiration-052084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-052084
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-052084: (2.330801279s)
--- PASS: TestCertExpiration (241.41s)
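--cert-expiration sets the validity window for the generated cluster certificates. The two starts account for roughly a minute of the runtime; most of the 241s is the deliberate wait for the 3m certs to lapse before the second start regenerates them:

minikube start -p cert-expiration-052084 --cert-expiration=3m \
  --driver=docker --container-runtime=crio
# ...wait out the 3m window...
minikube start -p cert-expiration-052084 --cert-expiration=8760h \
  --driver=docker --container-runtime=crio   # regenerates the expired certs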

                                                
                                    
x
+
TestForceSystemdFlag (40.85s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-358577 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-358577 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.225649449s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-358577 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-358577" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-358577
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-358577: (2.338978974s)
--- PASS: TestForceSystemdFlag (40.85s)
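--force-systemd switches the runtime's cgroup manager to systemd, and the ssh step reads the CRI-O drop-in to prove it. By hand (the expected line is an assumption about the drop-in's contents):

minikube start -p force-systemd-flag-358577 --force-systemd \
  --driver=docker --container-runtime=crio
minikube -p force-systemd-flag-358577 ssh \
  "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
# expected: cgroup_manager = "systemd"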

                                                
                                    
x
+
TestForceSystemdEnv (44.29s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-085097 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-085097 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.460135663s)
helpers_test.go:175: Cleaning up "force-systemd-env-085097" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-085097
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-085097: (2.825603843s)
--- PASS: TestForceSystemdEnv (44.29s)
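Same behavior as TestForceSystemdFlag, but driven through the environment rather than a flag; MINIKUBE_FORCE_SYSTEMD is the variable that shows up (empty) in the env listings elsewhere in this report:

MINIKUBE_FORCE_SYSTEMD=true minikube start -p force-systemd-env-085097 \
  --driver=docker --container-runtime=crio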

                                                
                                    
x
+
TestErrorSpam/setup (28.58s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-256181 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-256181 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-256181 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-256181 --driver=docker  --container-runtime=crio: (28.574944154s)
--- PASS: TestErrorSpam/setup (28.58s)

                                                
                                    
x
+
TestErrorSpam/start (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256181 --log_dir /tmp/nospam-256181 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256181 --log_dir /tmp/nospam-256181 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256181 --log_dir /tmp/nospam-256181 start --dry-run
--- PASS: TestErrorSpam/start (0.73s)

                                                
                                    
x
+
TestErrorSpam/status (0.96s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256181 --log_dir /tmp/nospam-256181 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256181 --log_dir /tmp/nospam-256181 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256181 --log_dir /tmp/nospam-256181 status
--- PASS: TestErrorSpam/status (0.96s)

                                                
                                    
x
+
TestErrorSpam/pause (1.64s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256181 --log_dir /tmp/nospam-256181 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256181 --log_dir /tmp/nospam-256181 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256181 --log_dir /tmp/nospam-256181 pause
--- PASS: TestErrorSpam/pause (1.64s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.77s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256181 --log_dir /tmp/nospam-256181 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256181 --log_dir /tmp/nospam-256181 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256181 --log_dir /tmp/nospam-256181 unpause
--- PASS: TestErrorSpam/unpause (1.77s)

                                                
                                    
x
+
TestErrorSpam/stop (1.41s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256181 --log_dir /tmp/nospam-256181 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-256181 --log_dir /tmp/nospam-256181 stop: (1.213382931s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256181 --log_dir /tmp/nospam-256181 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256181 --log_dir /tmp/nospam-256181 stop
--- PASS: TestErrorSpam/stop (1.41s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18925-1463640/.minikube/files/etc/test/nested/copy/1469078/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (76.32s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-335695 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-335695 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m16.316542444s)
--- PASS: TestFunctional/serial/StartWithProxy (76.32s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (28.74s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-335695 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-335695 --alsologtostderr -v=8: (28.743332931s)
functional_test.go:659: soft start took 28.744668351s for "functional-335695" cluster.
--- PASS: TestFunctional/serial/SoftStart (28.74s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-335695 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.77s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-335695 cache add registry.k8s.io/pause:3.1: (1.168130252s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-335695 cache add registry.k8s.io/pause:3.3: (1.205559215s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-335695 cache add registry.k8s.io/pause:latest: (1.398526984s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.77s)
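cache add pulls an image on the host and loads it into the node's container runtime, so it is available without registry access; the crictl listing used by the verify test below confirms it landed:

minikube -p functional-335695 cache add registry.k8s.io/pause:3.1
minikube -p functional-335695 ssh sudo crictl images   # cached image now listed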

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-335695 /tmp/TestFunctionalserialCacheCmdcacheadd_local664644668/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 cache add minikube-local-cache-test:functional-335695
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 cache delete minikube-local-cache-test:functional-335695
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-335695
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.07s)
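The same mechanism works for images that exist only in the host's local docker daemon:

docker build -t minikube-local-cache-test:functional-335695 .
minikube -p functional-335695 cache add minikube-local-cache-test:functional-335695
minikube -p functional-335695 cache delete minikube-local-cache-test:functional-335695
docker rmi minikube-local-cache-test:functional-335695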

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.96s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-335695 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (290.761994ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-335695 cache reload: (1.060014329s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.96s)
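cache reload pushes everything in minikube's local cache back into the node, which is what restores the image deleted inside the node above (the middle inspecti failing with exit 1 is the expected "image gone" state):

minikube -p functional-335695 ssh sudo crictl rmi registry.k8s.io/pause:latest
minikube -p functional-335695 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exit 1
minikube -p functional-335695 cache reload
minikube -p functional-335695 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds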

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 kubectl -- --context functional-335695 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-335695 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (41.57s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-335695 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0520 10:39:01.219612 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
E0520 10:39:01.226681 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
E0520 10:39:01.236937 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
E0520 10:39:01.257102 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
E0520 10:39:01.297378 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
E0520 10:39:01.377729 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
E0520 10:39:01.538144 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
E0520 10:39:01.858690 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
E0520 10:39:02.499498 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
E0520 10:39:03.779664 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
E0520 10:39:06.339868 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
E0520 10:39:11.460591 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
E0520 10:39:21.701220 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-335695 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.567245101s)
functional_test.go:757: restart took 41.567359798s for "functional-335695" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.57s)
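--extra-config takes component.key=value pairs and threads them into the kubeadm-managed component flags; the restart above applies one to the apiserver (the cert_rotation noise appears to come from the already-deleted addons-091599 profile, not from this cluster):

minikube start -p functional-335695 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all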

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-335695 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.69s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-335695 logs: (1.690657831s)
--- PASS: TestFunctional/serial/LogsCmd (1.69s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.76s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 logs --file /tmp/TestFunctionalserialLogsFileCmd8085008/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-335695 logs --file /tmp/TestFunctionalserialLogsFileCmd8085008/001/logs.txt: (1.754417684s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.76s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.26s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-335695 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-335695
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-335695: exit status 115 (508.586428ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31049 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-335695 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.26s)
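The exit-115/SVC_UNREACHABLE path fires when a service has no running pods behind it. A sketch of what such a manifest could look like; testdata/invalidsvc.yaml is not reproduced in this report, so the selector and port here are assumptions:

kubectl --context functional-335695 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: no-such-pod     # matches nothing, so the service has no endpoints
  ports:
  - port: 80
EOF
minikube service invalid-svc -p functional-335695   # exits 115 (SVC_UNREACHABLE)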

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-335695 config get cpus: exit status 14 (82.943744ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-335695 config get cpus: exit status 14 (58.497759ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
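config get exits 14 when the key is unset, which is what both Non-zero exits above assert on either side of a set/unset cycle:

minikube -p functional-335695 config set cpus 2
minikube -p functional-335695 config get cpus     # prints 2
minikube -p functional-335695 config unset cpus
minikube -p functional-335695 config get cpus     # exit status 14: key not found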

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (12.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-335695 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-335695 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1495284: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.78s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-335695 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-335695 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (206.59733ms)

                                                
                                                
-- stdout --
	* [functional-335695] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18925-1463640/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-1463640/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 10:40:04.651750 1494992 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:40:04.652001 1494992 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:40:04.652043 1494992 out.go:304] Setting ErrFile to fd 2...
	I0520 10:40:04.652073 1494992 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:40:04.652402 1494992 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-1463640/.minikube/bin
	I0520 10:40:04.652856 1494992 out.go:298] Setting JSON to false
	I0520 10:40:04.653993 1494992 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":152552,"bootTime":1716049053,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0520 10:40:04.654085 1494992 start.go:139] virtualization:  
	I0520 10:40:04.657897 1494992 out.go:177] * [functional-335695] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0520 10:40:04.660589 1494992 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 10:40:04.660646 1494992 notify.go:220] Checking for updates...
	I0520 10:40:04.664140 1494992 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 10:40:04.666995 1494992 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-1463640/kubeconfig
	I0520 10:40:04.669541 1494992 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-1463640/.minikube
	I0520 10:40:04.672282 1494992 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0520 10:40:04.675475 1494992 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 10:40:04.678611 1494992 config.go:182] Loaded profile config "functional-335695": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:40:04.679171 1494992 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 10:40:04.708039 1494992 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0520 10:40:04.708191 1494992 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 10:40:04.780606 1494992 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-05-20 10:40:04.770919925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 10:40:04.780718 1494992 docker.go:295] overlay module found
	I0520 10:40:04.783645 1494992 out.go:177] * Using the docker driver based on existing profile
	I0520 10:40:04.786328 1494992 start.go:297] selected driver: docker
	I0520 10:40:04.786352 1494992 start.go:901] validating driver "docker" against &{Name:functional-335695 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-335695 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:40:04.786505 1494992 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 10:40:04.789501 1494992 out.go:177] 
	W0520 10:40:04.792079 1494992 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0520 10:40:04.794783 1494992 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-335695 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.44s)
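--dry-run runs the full argument and resource validation without creating or mutating anything, so bad requests fail fast; exit code 23 is the RSRC_INSUFFICIENT_REQ_MEMORY guard shown above:

minikube start -p functional-335695 --dry-run --memory 250MB \
  --driver=docker --container-runtime=crio   # exit 23: below the 1800MB minimum
minikube start -p functional-335695 --dry-run \
  --driver=docker --container-runtime=crio   # validates and exits 0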

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-335695 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-335695 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (202.165897ms)

                                                
                                                
-- stdout --
	* [functional-335695] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18925-1463640/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-1463640/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 10:40:04.448531 1494950 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:40:04.448681 1494950 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:40:04.448706 1494950 out.go:304] Setting ErrFile to fd 2...
	I0520 10:40:04.448723 1494950 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:40:04.449097 1494950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-1463640/.minikube/bin
	I0520 10:40:04.449497 1494950 out.go:298] Setting JSON to false
	I0520 10:40:04.450570 1494950 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":152552,"bootTime":1716049053,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0520 10:40:04.450645 1494950 start.go:139] virtualization:  
	I0520 10:40:04.454034 1494950 out.go:177] * [functional-335695] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0520 10:40:04.457510 1494950 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 10:40:04.461507 1494950 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 10:40:04.457698 1494950 notify.go:220] Checking for updates...
	I0520 10:40:04.466690 1494950 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-1463640/kubeconfig
	I0520 10:40:04.469390 1494950 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-1463640/.minikube
	I0520 10:40:04.473501 1494950 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0520 10:40:04.476190 1494950 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 10:40:04.479485 1494950 config.go:182] Loaded profile config "functional-335695": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:40:04.480086 1494950 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 10:40:04.501326 1494950 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0520 10:40:04.501457 1494950 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 10:40:04.574293 1494950 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-05-20 10:40:04.560562951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 10:40:04.574405 1494950 docker.go:295] overlay module found
	I0520 10:40:04.577250 1494950 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0520 10:40:04.579804 1494950 start.go:297] selected driver: docker
	I0520 10:40:04.579825 1494950 start.go:901] validating driver "docker" against &{Name:functional-335695 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-335695 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:40:04.579947 1494950 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 10:40:04.583065 1494950 out.go:177] 
	W0520 10:40:04.585820 1494950 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0520 10:40:04.588346 1494950 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)
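The French output above is the point of this test: with the locale forced to French, a deliberately undersized --memory request makes minikube fail validation with the localized RSRC_INSUFFICIENT_REQ_MEMORY message (the requested 250MiB allocation is below the usable minimum of 1800MB). A minimal sketch of reproducing that by hand, assuming a minikube binary on PATH that honors LC_ALL for translations and supports start's --dry-run validation mode; the profile name is taken from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// --dry-run validates the configuration without mutating state;
	// 250MB is far below minikube's usable minimum, so this must fail.
	cmd := exec.Command("minikube", "start", "-p", "functional-335695",
		"--dry-run", "--memory", "250MB")
	// Force the French locale so minikube picks its fr translations.
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\nexit: %v\n", out, err)
}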

TestFunctional/parallel/StatusCmd (1.24s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.24s)
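Both invocations above read the same status structure, once through a Go template and once as JSON. A minimal sketch of consuming `minikube status -o json` programmatically, assuming minikube is on PATH; the struct below only mirrors the fields named in the template, it is not an exported minikube type:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the fields referenced by the status template above.
type clusterStatus struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// minikube status exits non-zero when a component is down, but the
	// JSON document on stdout is still usable, so the error is ignored here.
	out, _ := exec.Command("minikube", "-p", "functional-335695",
		"status", "-o", "json").Output()
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}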

TestFunctional/parallel/ServiceCmdConnect (12.73s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-335695 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-335695 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-kbzrq" [48ad52c1-889b-4df7-a513-63bc2a5f1d9a] Pending
E0520 10:39:42.183224 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
helpers_test.go:344: "hello-node-connect-6f49f58cd5-kbzrq" [48ad52c1-889b-4df7-a513-63bc2a5f1d9a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-kbzrq" [48ad52c1-889b-4df7-a513-63bc2a5f1d9a] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.004512261s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:30953
functional_test.go:1671: http://192.168.49.2:30953: success! body:

Hostname: hello-node-connect-6f49f58cd5-kbzrq

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30953
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.73s)
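The sequence above is: create a deployment, expose it as a NodePort service, resolve the URL with `minikube service --url`, and fetch it. A minimal sketch of the final fetch, assuming the NodePort endpoint printed in the log is still reachable from the host:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Endpoint as printed by `minikube service hello-node-connect --url`.
	resp, err := http.Get("http://192.168.49.2:30953/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("status=%d\n%s", resp.StatusCode, body)
}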

TestFunctional/parallel/AddonsCmd (0.28s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.28s)

TestFunctional/parallel/PersistentVolumeClaim (28.08s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [590499d0-c2d1-41a5-bd5b-dbf9b20d69df] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004438196s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-335695 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-335695 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-335695 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-335695 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e493c080-a330-4c85-bb92-570068405b09] Pending
helpers_test.go:344: "sp-pod" [e493c080-a330-4c85-bb92-570068405b09] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e493c080-a330-4c85-bb92-570068405b09] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004005828s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-335695 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-335695 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-335695 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [997cf076-78c3-4443-8a8f-e7f194bcf2c6] Pending
helpers_test.go:344: "sp-pod" [997cf076-78c3-4443-8a8f-e7f194bcf2c6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [997cf076-78c3-4443-8a8f-e7f194bcf2c6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.047641002s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-335695 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.08s)
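The second apply/wait/ls round-trip is what actually validates persistence: the file written through the first pod must survive the pod's deletion because it lives on the PVC, not in the container filesystem. A minimal sketch of that round-trip with kubectl, assuming the same context and the testdata manifests from the log:

package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) []byte {
	full := append([]string{"--context", "functional-335695"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("kubectl %v: %v\n%s", args, err, out))
	}
	return out
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// The real test waits for the replacement pod to be Running first.
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m")
	// /tmp/mount/foo must still be listed, since it lives on the PVC.
	fmt.Printf("%s", kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"))
}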

TestFunctional/parallel/SSHCmd (0.69s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

TestFunctional/parallel/CpCmd (2.22s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh -n functional-335695 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 cp functional-335695:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3847875690/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh -n functional-335695 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh -n functional-335695 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.22s)

TestFunctional/parallel/FileSync (0.33s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1469078/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh "sudo cat /etc/test/nested/copy/1469078/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

TestFunctional/parallel/CertSync (2.05s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1469078.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh "sudo cat /etc/ssl/certs/1469078.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1469078.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh "sudo cat /usr/share/ca-certificates/1469078.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/14690782.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh "sudo cat /etc/ssl/certs/14690782.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/14690782.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh "sudo cat /usr/share/ca-certificates/14690782.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.05s)
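The path pairs above are the same certificate under two names: the literal <pid>.pem copy and a hashed filename (51391683.0, 3ec20f2e.0) of the kind OpenSSL uses to look certificates up in /etc/ssl/certs. A minimal sketch of recomputing such a hashed name, assuming openssl is installed and that it is run where the synced .pem is visible (inside the node via minikube ssh, or against a local copy):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// -subject_hash prints the 8-hex-digit name OpenSSL expects in
	// /etc/ssl/certs; the ".0" suffix disambiguates hash collisions.
	out, err := exec.Command("openssl", "x509", "-noout", "-subject_hash",
		"-in", "/usr/share/ca-certificates/1469078.pem").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("/etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
}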

TestFunctional/parallel/NodeLabels (0.12s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-335695 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-335695 ssh "sudo systemctl is-active docker": exit status 1 (355.287803ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-335695 ssh "sudo systemctl is-active containerd": exit status 1 (382.771503ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)
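The non-zero exits here are the expected outcome: with cri-o as the active runtime, `systemctl is-active` prints "inactive" and exits 3 for docker and containerd, which minikube ssh surfaces as its own exit status 1. A minimal sketch of the same probe, assuming minikube is on PATH:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		out, err := exec.Command("minikube", "-p", "functional-335695",
			"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// Expected path: "inactive" on stdout plus a non-zero exit.
			fmt.Printf("%s: %q (exit %d)\n", unit, out, ee.ExitCode())
		} else if err == nil {
			fmt.Printf("%s: unexpectedly active\n", unit)
		}
	}
}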

TestFunctional/parallel/License (0.27s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-335695 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-335695 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-335695 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-335695 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1493067: os: process already finished
helpers_test.go:508: unable to kill pid 1492889: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-335695 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.38s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-335695 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [005f916b-bd25-4bfc-99c6-f7549b7b66d9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [005f916b-bd25-4bfc-99c6-f7549b7b66d9] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004005307s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.38s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-335695 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.115.203 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
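AccessDirect is the payoff of the tunnel: while `minikube tunnel` runs, the LoadBalancer service's ingress IP (here the cluster-internal address 10.96.115.203 reported by the IngressIP step) becomes routable from the host. A minimal sketch of that check, assuming a tunnel is up and kubectl is on PATH:

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-335695",
		"get", "svc", "nginx-svc", "-o",
		"jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		panic(err)
	}
	ip := strings.TrimSpace(string(out))
	// With the tunnel running, this cluster-internal IP answers directly.
	resp, err := http.Get("http://" + ip + "/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(ip, "->", resp.Status)
}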

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-335695 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-335695 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-335695 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-r2j2b" [b6e048aa-c42d-4502-8d6e-dc33c59519af] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-r2j2b" [b6e048aa-c42d-4502-8d6e-dc33c59519af] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.005003012s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

TestFunctional/parallel/ServiceCmd/List (0.95s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.95s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 service list -o json
functional_test.go:1490: Took "630.632312ms" to run "out/minikube-linux-arm64 -p functional-335695 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

TestFunctional/parallel/ProfileCmd/profile_list (0.53s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "462.271994ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "69.779871ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "440.603727ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "92.662427ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:31829
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

TestFunctional/parallel/ServiceCmd/Format (0.61s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.61s)

TestFunctional/parallel/MountCmd/any-port (7.26s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-335695 /tmp/TestFunctionalparallelMountCmdany-port2663271729/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1716201602152265969" to /tmp/TestFunctionalparallelMountCmdany-port2663271729/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1716201602152265969" to /tmp/TestFunctionalparallelMountCmdany-port2663271729/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1716201602152265969" to /tmp/TestFunctionalparallelMountCmdany-port2663271729/001/test-1716201602152265969
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-335695 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (485.475019ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 20 10:40 created-by-test
-rw-r--r-- 1 docker docker 24 May 20 10:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 20 10:40 test-1716201602152265969
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh cat /mount-9p/test-1716201602152265969
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-335695 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [52f60664-4959-49e8-a48f-0cc18a9deb64] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [52f60664-4959-49e8-a48f-0cc18a9deb64] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [52f60664-4959-49e8-a48f-0cc18a9deb64] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003932387s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-335695 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-335695 /tmp/TestFunctionalparallelMountCmdany-port2663271729/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.26s)
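The any-port flow is: start `minikube mount` as a background daemon, poll findmnt until the 9p filesystem becomes visible in the guest (the first failing probe above is normal), exercise it from a pod, then unmount. A minimal sketch of the polling step, assuming a mount process is already running against /mount-9p:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 10; i++ {
		out, err := exec.Command("minikube", "-p", "functional-335695",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mounted:\n%s", out)
			return
		}
		// Not visible yet; retry, as the test itself does.
		time.Sleep(time.Second)
	}
	fmt.Println("mount never became visible")
}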

TestFunctional/parallel/ServiceCmd/URL (0.41s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:31829
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.41s)

TestFunctional/parallel/MountCmd/specific-port (2.42s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-335695 /tmp/TestFunctionalparallelMountCmdspecific-port2964400842/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-335695 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (498.778844ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-335695 /tmp/TestFunctionalparallelMountCmdspecific-port2964400842/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-335695 ssh "sudo umount -f /mount-9p": exit status 1 (351.86534ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-335695 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-335695 /tmp/TestFunctionalparallelMountCmdspecific-port2964400842/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.42s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.54s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-335695 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3393733987/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-335695 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3393733987/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-335695 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3393733987/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-335695 ssh "findmnt -T" /mount1: exit status 1 (851.825934ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-335695 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-335695 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3393733987/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-335695 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3393733987/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-335695 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3393733987/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.54s)

TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.22s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-335695 version -o=json --components: (1.221776958s)
--- PASS: TestFunctional/parallel/Version/components (1.22s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-335695 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-335695
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-335695 image ls --format short --alsologtostderr:
I0520 10:40:35.997378 1497584 out.go:291] Setting OutFile to fd 1 ...
I0520 10:40:35.997533 1497584 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:40:35.997545 1497584 out.go:304] Setting ErrFile to fd 2...
I0520 10:40:35.997550 1497584 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:40:35.997837 1497584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-1463640/.minikube/bin
I0520 10:40:35.998524 1497584 config.go:182] Loaded profile config "functional-335695": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 10:40:35.998661 1497584 config.go:182] Loaded profile config "functional-335695": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 10:40:35.999160 1497584 cli_runner.go:164] Run: docker container inspect functional-335695 --format={{.State.Status}}
I0520 10:40:36.026360 1497584 ssh_runner.go:195] Run: systemctl --version
I0520 10:40:36.026435 1497584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-335695
I0520 10:40:36.060476 1497584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40507 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/functional-335695/id_rsa Username:docker}
I0520 10:40:36.154994 1497584 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-335695 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy              | v1.30.1            | 05eccb821e159 | 89.1MB |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4740c1948d3fc | 60.9MB |
| gcr.io/google-containers/addon-resizer  | functional-335695  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 014faa467e297 | 140MB  |
| registry.k8s.io/kube-apiserver          | v1.30.1            | 988b55d423baf | 114MB  |
| docker.io/library/nginx                 | latest             | 8dd77ef2d82ea | 197MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-scheduler          | v1.30.1            | 163ff818d154d | 61.6MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/library/nginx                 | alpine             | 9d6767b714bf1 | 51.5MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | 2437cf7621777 | 58.8MB |
| registry.k8s.io/kube-controller-manager | v1.30.1            | 234ac56e455be | 108MB  |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-335695 image ls --format table --alsologtostderr:
I0520 10:40:36.293637 1497645 out.go:291] Setting OutFile to fd 1 ...
I0520 10:40:36.293880 1497645 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:40:36.293910 1497645 out.go:304] Setting ErrFile to fd 2...
I0520 10:40:36.293933 1497645 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:40:36.294204 1497645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-1463640/.minikube/bin
I0520 10:40:36.294883 1497645 config.go:182] Loaded profile config "functional-335695": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 10:40:36.295053 1497645 config.go:182] Loaded profile config "functional-335695": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 10:40:36.295570 1497645 cli_runner.go:164] Run: docker container inspect functional-335695 --format={{.State.Status}}
I0520 10:40:36.314404 1497645 ssh_runner.go:195] Run: systemctl --version
I0520 10:40:36.314458 1497645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-335695
I0520 10:40:36.335571 1497645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40507 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/functional-335695/id_rsa Username:docker}
I0520 10:40:36.438428 1497645 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-335695 image ls --format json --alsologtostderr:
[{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-335695"],"size":"34114467"},{"id":"234ac56e455bef8ae70f120f49fcf5daea5c4171c5ffbf8f9e791a516fef99b4","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52","registry.k8s.io/kube-controller-manager@sha256:7107370c7cd3eba054a9326c2856988e79c9364e0244c53026dd87111c8e1882"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.1"],"size":"108229958"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size
":"247562353"},{"id":"8dd77ef2d82eade8dcf2c08ea032bd9cba04c9d28ace2ccf08ad6804c27bf14f","repoDigests":["docker.io/library/nginx@sha256:557b2c07439ee9e53cb178e3bdbb87114b31c48a41a17c8750c5908d65adeec6","docker.io/library/nginx@sha256:a484819eb60211f5299034ac80f6a681b06f89e65866ce91f356ed7c72af059c"],"repoTags":["docker.io/library/nginx:latest"],"size":"197095429"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b","registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"140414767"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"58812704"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/sto
rage-provisioner:v5"],"size":"29037500"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"163ff818d154da230c6c8a6211a0d939689ba04df75dba2c3742fe740ffdf44a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036","registry.k8s.io/kube-scheduler@sha256:fba503a1eff02dfe4d3c91ad7f52cb6d298fe53709046e9025a35ef9af20e236"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.1"],"size":"61568326"},{"id":"4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:fde0f6062db0a3b3323d76a4cde031f0f891b5b79d12be642b7e5aad68f2836f"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494e
a"],"size":"60940831"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"05eccb821e159092de1ef74c21cd16c403e1cda549ba56687391f50f087310ee","repoDigests":["registry.k8s.io/kube-proxy@sha256:40a978ff6e378a33e3508910a74993bf9b442ad0d97c7b939f4324db51602c28","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.1"],"size":"89133975"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e88
1e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"9d6767b714bf1ecd2cdab75b590f2c572ac33743c7786ef5d619f7b088dbe0bb","repoDigests":["docker.io/library/nginx@sha256:05325b3a32db871dc396a859d9a9609d75f50d2f7ad12194f9f3a550111bdcaa","docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00"],"repoTags":["docker.io/library/nginx:alpine"],"size":"51540272"},{"id":"988b55d423baf54b1515b7560890c84d1822a8dfbdfcecfb2576f8cb5c2b28ee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0d4a30512343
87b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea","registry.k8s.io/kube-apiserver@sha256:9015c784f0e3e72028f801f3331bf3149db3c04b9212bc53f08c1e8924597bf7"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"],"size":"113538528"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-335695 image ls --format json --alsologtostderr:
I0520 10:40:36.285193 1497640 out.go:291] Setting OutFile to fd 1 ...
I0520 10:40:36.285322 1497640 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:40:36.285328 1497640 out.go:304] Setting ErrFile to fd 2...
I0520 10:40:36.285333 1497640 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:40:36.285564 1497640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-1463640/.minikube/bin
I0520 10:40:36.286238 1497640 config.go:182] Loaded profile config "functional-335695": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 10:40:36.286402 1497640 config.go:182] Loaded profile config "functional-335695": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 10:40:36.286947 1497640 cli_runner.go:164] Run: docker container inspect functional-335695 --format={{.State.Status}}
I0520 10:40:36.304449 1497640 ssh_runner.go:195] Run: systemctl --version
I0520 10:40:36.304576 1497640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-335695
I0520 10:40:36.323068 1497640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40507 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/functional-335695/id_rsa Username:docker}
I0520 10:40:36.410360 1497640 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
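
For reference, the JSON above is a single array of image objects, each carrying four fields: id, repoDigests, repoTags, and size (a decimal byte count encoded as a string). A minimal sketch of decoding that output outside the test harness, assuming a minikube binary on PATH and the profile name from this run; the struct and variable names are illustrative, not minikube's own types:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// image mirrors the fields visible in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // byte count, reported as a string
}

func main() {
	// --alsologtostderr keeps logs off stdout, so only the JSON is captured.
	out, err := exec.Command("minikube", "-p", "functional-335695",
		"image", "ls", "--format", "json", "--alsologtostderr").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 { // untagged entries have an empty repoTags list
			fmt.Printf("%s\t%s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}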

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-335695 image ls --format yaml --alsologtostderr:
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-335695
size: "34114467"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "58812704"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 234ac56e455bef8ae70f120f49fcf5daea5c4171c5ffbf8f9e791a516fef99b4
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52
- registry.k8s.io/kube-controller-manager@sha256:7107370c7cd3eba054a9326c2856988e79c9364e0244c53026dd87111c8e1882
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.1
size: "108229958"
- id: 05eccb821e159092de1ef74c21cd16c403e1cda549ba56687391f50f087310ee
repoDigests:
- registry.k8s.io/kube-proxy@sha256:40a978ff6e378a33e3508910a74993bf9b442ad0d97c7b939f4324db51602c28
- registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c
repoTags:
- registry.k8s.io/kube-proxy:v1.30.1
size: "89133975"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 8dd77ef2d82eade8dcf2c08ea032bd9cba04c9d28ace2ccf08ad6804c27bf14f
repoDigests:
- docker.io/library/nginx@sha256:557b2c07439ee9e53cb178e3bdbb87114b31c48a41a17c8750c5908d65adeec6
- docker.io/library/nginx@sha256:a484819eb60211f5299034ac80f6a681b06f89e65866ce91f356ed7c72af059c
repoTags:
- docker.io/library/nginx:latest
size: "197095429"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:fde0f6062db0a3b3323d76a4cde031f0f891b5b79d12be642b7e5aad68f2836f
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "60940831"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
- registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "140414767"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 9d6767b714bf1ecd2cdab75b590f2c572ac33743c7786ef5d619f7b088dbe0bb
repoDigests:
- docker.io/library/nginx@sha256:05325b3a32db871dc396a859d9a9609d75f50d2f7ad12194f9f3a550111bdcaa
- docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00
repoTags:
- docker.io/library/nginx:alpine
size: "51540272"
- id: 988b55d423baf54b1515b7560890c84d1822a8dfbdfcecfb2576f8cb5c2b28ee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea
- registry.k8s.io/kube-apiserver@sha256:9015c784f0e3e72028f801f3331bf3149db3c04b9212bc53f08c1e8924597bf7
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.1
size: "113538528"
- id: 163ff818d154da230c6c8a6211a0d939689ba04df75dba2c3742fe740ffdf44a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036
- registry.k8s.io/kube-scheduler@sha256:fba503a1eff02dfe4d3c91ad7f52cb6d298fe53709046e9025a35ef9af20e236
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.1
size: "61568326"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-335695 image ls --format yaml --alsologtostderr:
I0520 10:40:36.004197 1497585 out.go:291] Setting OutFile to fd 1 ...
I0520 10:40:36.004474 1497585 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:40:36.004505 1497585 out.go:304] Setting ErrFile to fd 2...
I0520 10:40:36.004558 1497585 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:40:36.004898 1497585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-1463640/.minikube/bin
I0520 10:40:36.005889 1497585 config.go:182] Loaded profile config "functional-335695": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 10:40:36.006110 1497585 config.go:182] Loaded profile config "functional-335695": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 10:40:36.006740 1497585 cli_runner.go:164] Run: docker container inspect functional-335695 --format={{.State.Status}}
I0520 10:40:36.027899 1497585 ssh_runner.go:195] Run: systemctl --version
I0520 10:40:36.027957 1497585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-335695
I0520 10:40:36.051793 1497585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40507 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/functional-335695/id_rsa Username:docker}
I0520 10:40:36.144185 1497585 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-335695 ssh pgrep buildkitd: exit status 1 (279.023048ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 image build -t localhost/my-image:functional-335695 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-335695 image build -t localhost/my-image:functional-335695 testdata/build --alsologtostderr: (2.126142925s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-335695 image build -t localhost/my-image:functional-335695 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 38e163f4e75
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-335695
--> 89555d5a138
Successfully tagged localhost/my-image:functional-335695
89555d5a138b772717c63a22e7af4de0b4d2073550e7cbc0f47bd8c5c34fc2ea
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-335695 image build -t localhost/my-image:functional-335695 testdata/build --alsologtostderr:
I0520 10:40:36.793233 1497747 out.go:291] Setting OutFile to fd 1 ...
I0520 10:40:36.794229 1497747 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:40:36.794260 1497747 out.go:304] Setting ErrFile to fd 2...
I0520 10:40:36.794278 1497747 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:40:36.794637 1497747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-1463640/.minikube/bin
I0520 10:40:36.795334 1497747 config.go:182] Loaded profile config "functional-335695": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 10:40:36.796015 1497747 config.go:182] Loaded profile config "functional-335695": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 10:40:36.796543 1497747 cli_runner.go:164] Run: docker container inspect functional-335695 --format={{.State.Status}}
I0520 10:40:36.813491 1497747 ssh_runner.go:195] Run: systemctl --version
I0520 10:40:36.813549 1497747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-335695
I0520 10:40:36.832494 1497747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40507 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/functional-335695/id_rsa Username:docker}
I0520 10:40:36.922333 1497747 build_images.go:161] Building image from path: /tmp/build.1930137812.tar
I0520 10:40:36.922418 1497747 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0520 10:40:36.931679 1497747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1930137812.tar
I0520 10:40:36.935171 1497747 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1930137812.tar: stat -c "%s %y" /var/lib/minikube/build/build.1930137812.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1930137812.tar': No such file or directory
I0520 10:40:36.935201 1497747 ssh_runner.go:362] scp /tmp/build.1930137812.tar --> /var/lib/minikube/build/build.1930137812.tar (3072 bytes)
I0520 10:40:36.967961 1497747 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1930137812
I0520 10:40:36.977342 1497747 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1930137812 -xf /var/lib/minikube/build/build.1930137812.tar
I0520 10:40:36.986708 1497747 crio.go:315] Building image: /var/lib/minikube/build/build.1930137812
I0520 10:40:36.986823 1497747 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-335695 /var/lib/minikube/build/build.1930137812 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0520 10:40:38.845869 1497747 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-335695 /var/lib/minikube/build/build.1930137812 --cgroup-manager=cgroupfs: (1.85901086s)
I0520 10:40:38.845957 1497747 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1930137812
I0520 10:40:38.854804 1497747 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1930137812.tar
I0520 10:40:38.863701 1497747 build_images.go:217] Built localhost/my-image:functional-335695 from /tmp/build.1930137812.tar
I0520 10:40:38.863734 1497747 build_images.go:133] succeeded building to: functional-335695
I0520 10:40:38.863740 1497747 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.64s)
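
The stderr above traces the build path on a crio node: the CLI tars the local build context, copies the tarball onto the node, unpacks it under /var/lib/minikube/build, and runs sudo podman build with --cgroup-manager=cgroupfs. Below is a rough sketch of the same sequence driven through `minikube cp` and `minikube ssh`; the tarball name and the /home/docker and /tmp/build paths are illustrative (the test itself stages a randomly suffixed path under /var/lib/minikube/build):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes one command, streaming its output, and aborts on failure.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "%s %v: %v\n", name, args, err)
		os.Exit(1)
	}
}

func main() {
	const profile = "functional-335695"
	// Ship the build-context tarball to the node (the primary node shares
	// the profile name, matching the cp syntax used in CopyFile below).
	run("minikube", "-p", profile, "cp", "build.tar",
		profile+":/home/docker/build.tar")
	// Unpack and build, mirroring the ssh_runner commands in the log.
	run("minikube", "-p", profile, "ssh",
		"sudo mkdir -p /tmp/build && "+
			"sudo tar -C /tmp/build -xf /home/docker/build.tar && "+
			"sudo podman build -t localhost/my-image:"+profile+
			" /tmp/build --cgroup-manager=cgroupfs")
}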

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
2024/05/20 10:40:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.253268483s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-335695
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 image load --daemon gcr.io/google-containers/addon-resizer:functional-335695 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-335695 image load --daemon gcr.io/google-containers/addon-resizer:functional-335695 --alsologtostderr: (5.423468396s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 image ls
E0520 10:40:23.144021 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.66s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 image load --daemon gcr.io/google-containers/addon-resizer:functional-335695 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-335695 image load --daemon gcr.io/google-containers/addon-resizer:functional-335695 --alsologtostderr: (2.680867553s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.257475724s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-335695
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 image load --daemon gcr.io/google-containers/addon-resizer:functional-335695 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-335695 image load --daemon gcr.io/google-containers/addon-resizer:functional-335695 --alsologtostderr: (3.661279393s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 image save gcr.io/google-containers/addon-resizer:functional-335695 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.87s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 image rm gcr.io/google-containers/addon-resizer:functional-335695 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-335695 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.017003816s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-335695
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-335695 image save --daemon gcr.io/google-containers/addon-resizer:functional-335695 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-335695
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.95s)
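
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon exercise a save/remove/load round trip through the same CLI verbs. A compact sketch of that cycle, assuming the profile from this run; the /tmp tarball path is illustrative:

package main

import (
	"os"
	"os/exec"
)

// must runs one command and panics on failure, keeping the sketch short.
func must(args ...string) {
	cmd := exec.Command(args[0], args[1:]...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	const profile = "functional-335695"
	ref := "gcr.io/google-containers/addon-resizer:" + profile
	tar := "/tmp/addon-resizer-save.tar"
	must("minikube", "-p", profile, "image", "save", ref, tar) // node -> tarball
	must("minikube", "-p", profile, "image", "rm", ref)        // drop from the node
	must("minikube", "-p", profile, "image", "load", tar)      // tarball -> node
	must("minikube", "-p", profile, "image", "ls")             // verify it is back
}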

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-335695
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-335695
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-335695
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (161s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-122174 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0520 10:41:45.064317 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-122174 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m40.159541948s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (161.00s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-122174 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-122174 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-122174 -- rollout status deployment/busybox: (3.38785074s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-122174 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-122174 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-122174 -- exec busybox-fc5497c4f-782r7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-122174 -- exec busybox-fc5497c4f-8l8qn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-122174 -- exec busybox-fc5497c4f-9zglw -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-122174 -- exec busybox-fc5497c4f-782r7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-122174 -- exec busybox-fc5497c4f-8l8qn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-122174 -- exec busybox-fc5497c4f-9zglw -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-122174 -- exec busybox-fc5497c4f-782r7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-122174 -- exec busybox-fc5497c4f-8l8qn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-122174 -- exec busybox-fc5497c4f-9zglw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.29s)
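
The deploy check above resolves three names (kubernetes.io, kubernetes.default, and kubernetes.default.svc.cluster.local) from each busybox pod. A small sketch of the same fan-out, assuming, as in this test, that the busybox deployment's pods are the only pods in the default namespace:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Discover pod names the same way the test does, via jsonpath.
	out, err := exec.Command("kubectl", "--context", "ha-122174", "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}
	names := []string{"kubernetes.io", "kubernetes.default",
		"kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, name := range names {
			// A non-nil error here means the lookup failed inside the pod.
			err := exec.Command("kubectl", "--context", "ha-122174",
				"exec", pod, "--", "nslookup", name).Run()
			fmt.Printf("%s -> %s: err=%v\n", pod, name, err)
		}
	}
}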

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-122174 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-122174 -- exec busybox-fc5497c4f-782r7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-122174 -- exec busybox-fc5497c4f-782r7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-122174 -- exec busybox-fc5497c4f-8l8qn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-122174 -- exec busybox-fc5497c4f-8l8qn -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-122174 -- exec busybox-fc5497c4f-9zglw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-122174 -- exec busybox-fc5497c4f-9zglw -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.63s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-122174 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-122174 -v=7 --alsologtostderr: (23.974524109s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.96s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-122174 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.74s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (18.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 cp testdata/cp-test.txt ha-122174:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 cp ha-122174:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile164687387/001/cp-test_ha-122174.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 cp ha-122174:/home/docker/cp-test.txt ha-122174-m02:/home/docker/cp-test_ha-122174_ha-122174-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174-m02 "sudo cat /home/docker/cp-test_ha-122174_ha-122174-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 cp ha-122174:/home/docker/cp-test.txt ha-122174-m03:/home/docker/cp-test_ha-122174_ha-122174-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174-m03 "sudo cat /home/docker/cp-test_ha-122174_ha-122174-m03.txt"
E0520 10:44:01.218925 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 cp ha-122174:/home/docker/cp-test.txt ha-122174-m04:/home/docker/cp-test_ha-122174_ha-122174-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174-m04 "sudo cat /home/docker/cp-test_ha-122174_ha-122174-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 cp testdata/cp-test.txt ha-122174-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 cp ha-122174-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile164687387/001/cp-test_ha-122174-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 cp ha-122174-m02:/home/docker/cp-test.txt ha-122174:/home/docker/cp-test_ha-122174-m02_ha-122174.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174 "sudo cat /home/docker/cp-test_ha-122174-m02_ha-122174.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 cp ha-122174-m02:/home/docker/cp-test.txt ha-122174-m03:/home/docker/cp-test_ha-122174-m02_ha-122174-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174-m03 "sudo cat /home/docker/cp-test_ha-122174-m02_ha-122174-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 cp ha-122174-m02:/home/docker/cp-test.txt ha-122174-m04:/home/docker/cp-test_ha-122174-m02_ha-122174-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174-m04 "sudo cat /home/docker/cp-test_ha-122174-m02_ha-122174-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 cp testdata/cp-test.txt ha-122174-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 cp ha-122174-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile164687387/001/cp-test_ha-122174-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 cp ha-122174-m03:/home/docker/cp-test.txt ha-122174:/home/docker/cp-test_ha-122174-m03_ha-122174.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174 "sudo cat /home/docker/cp-test_ha-122174-m03_ha-122174.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 cp ha-122174-m03:/home/docker/cp-test.txt ha-122174-m02:/home/docker/cp-test_ha-122174-m03_ha-122174-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174-m02 "sudo cat /home/docker/cp-test_ha-122174-m03_ha-122174-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 cp ha-122174-m03:/home/docker/cp-test.txt ha-122174-m04:/home/docker/cp-test_ha-122174-m03_ha-122174-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174-m04 "sudo cat /home/docker/cp-test_ha-122174-m03_ha-122174-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 cp testdata/cp-test.txt ha-122174-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 cp ha-122174-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile164687387/001/cp-test_ha-122174-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 cp ha-122174-m04:/home/docker/cp-test.txt ha-122174:/home/docker/cp-test_ha-122174-m04_ha-122174.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174 "sudo cat /home/docker/cp-test_ha-122174-m04_ha-122174.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 cp ha-122174-m04:/home/docker/cp-test.txt ha-122174-m02:/home/docker/cp-test_ha-122174-m04_ha-122174-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174-m02 "sudo cat /home/docker/cp-test_ha-122174-m04_ha-122174-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 cp ha-122174-m04:/home/docker/cp-test.txt ha-122174-m03:/home/docker/cp-test_ha-122174-m04_ha-122174-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 ssh -n ha-122174-m03 "sudo cat /home/docker/cp-test_ha-122174-m04_ha-122174-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.73s)
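
The copy matrix above repeats one pattern per node pair: `minikube -p <profile> cp <src> <node>:<path>`, then `minikube -p <profile> ssh -n <node> "sudo cat <path>"` to verify the file landed. A sketch of that push-and-verify loop across the four nodes of this cluster; node names follow the <profile>[-mNN] convention seen in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const profile = "ha-122174"
	nodes := []string{profile, profile + "-m02", profile + "-m03", profile + "-m04"}
	for _, n := range nodes {
		// Push the same test file to every node...
		if err := exec.Command("minikube", "-p", profile, "cp",
			"testdata/cp-test.txt", n+":/home/docker/cp-test.txt").Run(); err != nil {
			fmt.Println(n, "cp failed:", err)
			continue
		}
		// ...then read it back over ssh to confirm the contents.
		out, err := exec.Command("minikube", "-p", profile, "ssh", "-n", n,
			"sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			fmt.Println(n, "ssh failed:", err)
			continue
		}
		fmt.Printf("%s: %s", n, out)
	}
}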

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-122174 node stop m02 -v=7 --alsologtostderr: (11.993218511s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-122174 status -v=7 --alsologtostderr: exit status 7 (717.108937ms)

                                                
                                                
-- stdout --
	ha-122174
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-122174-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-122174-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-122174-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 10:44:27.520310 1512555 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:44:27.520448 1512555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:44:27.520453 1512555 out.go:304] Setting ErrFile to fd 2...
	I0520 10:44:27.520459 1512555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:44:27.520713 1512555 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-1463640/.minikube/bin
	I0520 10:44:27.521026 1512555 out.go:298] Setting JSON to false
	I0520 10:44:27.521053 1512555 mustload.go:65] Loading cluster: ha-122174
	I0520 10:44:27.521519 1512555 config.go:182] Loaded profile config "ha-122174": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:44:27.521536 1512555 status.go:255] checking status of ha-122174 ...
	I0520 10:44:27.521764 1512555 notify.go:220] Checking for updates...
	I0520 10:44:27.522157 1512555 cli_runner.go:164] Run: docker container inspect ha-122174 --format={{.State.Status}}
	I0520 10:44:27.540983 1512555 status.go:330] ha-122174 host status = "Running" (err=<nil>)
	I0520 10:44:27.541021 1512555 host.go:66] Checking if "ha-122174" exists ...
	I0520 10:44:27.541337 1512555 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-122174
	I0520 10:44:27.560942 1512555 host.go:66] Checking if "ha-122174" exists ...
	I0520 10:44:27.561335 1512555 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:44:27.561384 1512555 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-122174
	I0520 10:44:27.586107 1512555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40512 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/ha-122174/id_rsa Username:docker}
	I0520 10:44:27.677340 1512555 ssh_runner.go:195] Run: systemctl --version
	I0520 10:44:27.683218 1512555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:44:27.696367 1512555 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 10:44:27.771729 1512555 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:72 SystemTime:2024-05-20 10:44:27.760971999 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 10:44:27.772371 1512555 kubeconfig.go:125] found "ha-122174" server: "https://192.168.49.254:8443"
	I0520 10:44:27.772389 1512555 api_server.go:166] Checking apiserver status ...
	I0520 10:44:27.772431 1512555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:44:27.784215 1512555 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1413/cgroup
	I0520 10:44:27.796920 1512555 api_server.go:182] apiserver freezer: "3:freezer:/docker/3f4944a459ff45df39572d785dbc4fce57466173de990b628433a80d9f19b056/crio/crio-f59646f41a68d72ca78afcfc060afa87ffcf7ec125202cf37398522b068e482e"
	I0520 10:44:27.797000 1512555 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3f4944a459ff45df39572d785dbc4fce57466173de990b628433a80d9f19b056/crio/crio-f59646f41a68d72ca78afcfc060afa87ffcf7ec125202cf37398522b068e482e/freezer.state
	I0520 10:44:27.806097 1512555 api_server.go:204] freezer state: "THAWED"
	I0520 10:44:27.806126 1512555 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0520 10:44:27.813937 1512555 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0520 10:44:27.813963 1512555 status.go:422] ha-122174 apiserver status = Running (err=<nil>)
	I0520 10:44:27.813974 1512555 status.go:257] ha-122174 status: &{Name:ha-122174 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:44:27.813991 1512555 status.go:255] checking status of ha-122174-m02 ...
	I0520 10:44:27.814295 1512555 cli_runner.go:164] Run: docker container inspect ha-122174-m02 --format={{.State.Status}}
	I0520 10:44:27.829433 1512555 status.go:330] ha-122174-m02 host status = "Stopped" (err=<nil>)
	I0520 10:44:27.829455 1512555 status.go:343] host is not running, skipping remaining checks
	I0520 10:44:27.829462 1512555 status.go:257] ha-122174-m02 status: &{Name:ha-122174-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:44:27.829482 1512555 status.go:255] checking status of ha-122174-m03 ...
	I0520 10:44:27.829902 1512555 cli_runner.go:164] Run: docker container inspect ha-122174-m03 --format={{.State.Status}}
	I0520 10:44:27.847924 1512555 status.go:330] ha-122174-m03 host status = "Running" (err=<nil>)
	I0520 10:44:27.847946 1512555 host.go:66] Checking if "ha-122174-m03" exists ...
	I0520 10:44:27.848265 1512555 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-122174-m03
	I0520 10:44:27.866223 1512555 host.go:66] Checking if "ha-122174-m03" exists ...
	I0520 10:44:27.866532 1512555 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:44:27.866584 1512555 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-122174-m03
	I0520 10:44:27.883873 1512555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40522 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/ha-122174-m03/id_rsa Username:docker}
	I0520 10:44:27.970803 1512555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:44:27.986701 1512555 kubeconfig.go:125] found "ha-122174" server: "https://192.168.49.254:8443"
	I0520 10:44:27.986729 1512555 api_server.go:166] Checking apiserver status ...
	I0520 10:44:27.986773 1512555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:44:27.997836 1512555 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1350/cgroup
	I0520 10:44:28.009102 1512555 api_server.go:182] apiserver freezer: "3:freezer:/docker/12d52c889cf7fa6e8934920c0eb62d4f889e13483bf0805686f6bcd379faa89a/crio/crio-f87e96aca2027344ff1f6f7091b04fd30a2ba8cc0884cb1d6accb91771dffa93"
	I0520 10:44:28.009207 1512555 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/12d52c889cf7fa6e8934920c0eb62d4f889e13483bf0805686f6bcd379faa89a/crio/crio-f87e96aca2027344ff1f6f7091b04fd30a2ba8cc0884cb1d6accb91771dffa93/freezer.state
	I0520 10:44:28.019297 1512555 api_server.go:204] freezer state: "THAWED"
	I0520 10:44:28.019327 1512555 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0520 10:44:28.026985 1512555 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0520 10:44:28.027020 1512555 status.go:422] ha-122174-m03 apiserver status = Running (err=<nil>)
	I0520 10:44:28.027030 1512555 status.go:257] ha-122174-m03 status: &{Name:ha-122174-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:44:28.027048 1512555 status.go:255] checking status of ha-122174-m04 ...
	I0520 10:44:28.027374 1512555 cli_runner.go:164] Run: docker container inspect ha-122174-m04 --format={{.State.Status}}
	I0520 10:44:28.045512 1512555 status.go:330] ha-122174-m04 host status = "Running" (err=<nil>)
	I0520 10:44:28.045539 1512555 host.go:66] Checking if "ha-122174-m04" exists ...
	I0520 10:44:28.045890 1512555 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-122174-m04
	I0520 10:44:28.063000 1512555 host.go:66] Checking if "ha-122174-m04" exists ...
	I0520 10:44:28.063300 1512555 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:44:28.063352 1512555 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-122174-m04
	I0520 10:44:28.080453 1512555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40527 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/ha-122174-m04/id_rsa Username:docker}
	I0520 10:44:28.170608 1512555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:44:28.184359 1512555 status.go:257] ha-122174-m04 status: &{Name:ha-122174-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.71s)
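
The stderr above also documents how `status` probes an apiserver: pgrep the kube-apiserver process, read its freezer cgroup from /proc/<pid>/cgroup, confirm freezer.state is THAWED, then GET /healthz on the control-plane VIP. A condensed sketch of that probe over `minikube ssh`; the final curl step is an approximation, since minikube uses its own authenticated client and /healthz may reject anonymous requests on other clusters:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshOut runs a command on one node via `minikube ssh` and returns trimmed stdout.
func sshOut(profile, node, cmd string) string {
	out, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node, cmd).Output()
	if err != nil {
		panic(err)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	const profile, node = "ha-122174", "ha-122174"
	pid := sshOut(profile, node, "sudo pgrep -xnf kube-apiserver.*minikube.*")
	// A freezer line looks like "3:freezer:/docker/<id>/crio/crio-<id>".
	line := sshOut(profile, node, "sudo egrep ^[0-9]+:freezer: /proc/"+pid+"/cgroup")
	path := strings.SplitN(line, ":", 3)[2]
	state := sshOut(profile, node, "sudo cat /sys/fs/cgroup/freezer"+path+"/freezer.state")
	fmt.Println("freezer state:", state) // THAWED means the process is not frozen
	// -k skips certificate verification against the VIP, as a rough stand-in
	// for the authenticated healthz check in the log.
	fmt.Println(sshOut(profile, node, "curl -sk https://192.168.49.254:8443/healthz"))
}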

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (22.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 node start m02 -v=7 --alsologtostderr
E0520 10:44:28.905410 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
E0520 10:44:32.162286 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
E0520 10:44:32.167932 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
E0520 10:44:32.178114 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
E0520 10:44:32.198341 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
E0520 10:44:32.238541 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
E0520 10:44:32.318792 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
E0520 10:44:32.478923 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
E0520 10:44:32.799990 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
E0520 10:44:33.440215 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
E0520 10:44:34.720991 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
E0520 10:44:37.282074 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
E0520 10:44:42.402561 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-122174 node start m02 -v=7 --alsologtostderr: (20.517302363s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-122174 status -v=7 --alsologtostderr: (1.507408841s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (22.16s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (7.17s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0520 10:44:52.642918 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (7.165738343s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (7.17s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (201.36s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-122174 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-122174 -v=7 --alsologtostderr
E0520 10:45:13.124144 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-122174 -v=7 --alsologtostderr: (36.943888545s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-122174 --wait=true -v=7 --alsologtostderr
E0520 10:45:54.084651 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
E0520 10:47:16.005191 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-122174 --wait=true -v=7 --alsologtostderr: (2m44.274205352s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-122174
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (201.36s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.88s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-122174 node delete m03 -v=7 --alsologtostderr: (12.00011996s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
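
The go-template in the command above prints one Ready-condition status line per node. For reference, the same template can be evaluated directly in Go against `kubectl get nodes -o json`; a sketch assuming kubectl is on PATH and a cluster is reachable (this program is not part of the test suite):

	package main

	import (
		"encoding/json"
		"os"
		"os/exec"
		"text/template"
	)

	// The exact template string from the test invocation above.
	const readyTmpl = `{{range .items}}{{range .status.conditions}}` +
		`{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	func main() {
		// Fetch the node list as JSON instead of letting kubectl render the template.
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var nodeList map[string]any
		if err := json.Unmarshal(out, &nodeList); err != nil {
			panic(err)
		}
		t := template.Must(template.New("ready").Parse(readyTmpl))
		// Prints one " True" (or " False") line per node, matching the kubectl output.
		if err := t.Execute(os.Stdout, nodeList); err != nil {
			panic(err)
		}
	}
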
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.88s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

TestMultiControlPlane/serial/StopCluster (35.85s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 stop -v=7 --alsologtostderr
E0520 10:49:01.219608 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-122174 stop -v=7 --alsologtostderr: (35.749707438s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-122174 status -v=7 --alsologtostderr: exit status 7 (103.41083ms)

-- stdout --
	ha-122174
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-122174-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-122174-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0520 10:49:08.595349 1526653 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:49:08.595498 1526653 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:49:08.595509 1526653 out.go:304] Setting ErrFile to fd 2...
	I0520 10:49:08.595515 1526653 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:49:08.595756 1526653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-1463640/.minikube/bin
	I0520 10:49:08.595939 1526653 out.go:298] Setting JSON to false
	I0520 10:49:08.595987 1526653 mustload.go:65] Loading cluster: ha-122174
	I0520 10:49:08.596145 1526653 notify.go:220] Checking for updates...
	I0520 10:49:08.596412 1526653 config.go:182] Loaded profile config "ha-122174": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:49:08.596431 1526653 status.go:255] checking status of ha-122174 ...
	I0520 10:49:08.596915 1526653 cli_runner.go:164] Run: docker container inspect ha-122174 --format={{.State.Status}}
	I0520 10:49:08.614321 1526653 status.go:330] ha-122174 host status = "Stopped" (err=<nil>)
	I0520 10:49:08.614342 1526653 status.go:343] host is not running, skipping remaining checks
	I0520 10:49:08.614350 1526653 status.go:257] ha-122174 status: &{Name:ha-122174 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:49:08.614378 1526653 status.go:255] checking status of ha-122174-m02 ...
	I0520 10:49:08.614701 1526653 cli_runner.go:164] Run: docker container inspect ha-122174-m02 --format={{.State.Status}}
	I0520 10:49:08.630773 1526653 status.go:330] ha-122174-m02 host status = "Stopped" (err=<nil>)
	I0520 10:49:08.630806 1526653 status.go:343] host is not running, skipping remaining checks
	I0520 10:49:08.630814 1526653 status.go:257] ha-122174-m02 status: &{Name:ha-122174-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:49:08.630845 1526653 status.go:255] checking status of ha-122174-m04 ...
	I0520 10:49:08.631157 1526653 cli_runner.go:164] Run: docker container inspect ha-122174-m04 --format={{.State.Status}}
	I0520 10:49:08.651347 1526653 status.go:330] ha-122174-m04 host status = "Stopped" (err=<nil>)
	I0520 10:49:08.651366 1526653 status.go:343] host is not running, skipping remaining checks
	I0520 10:49:08.651383 1526653 status.go:257] ha-122174-m04 status: &{Name:ha-122174-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
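
The stderr above shows the status short-circuit used throughout this report: each node's host state comes from `docker container inspect --format={{.State.Status}}`, and once the host reports anything other than running, the kubelet and apiserver checks are skipped. A rough Go equivalent of that first probe (illustrative, not minikube's actual code; the helper name containerState is invented):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState returns Docker's state string ("running", "exited",
	// "paused", ...) for the named container, e.g. "ha-122174-m02".
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := containerState("ha-122174") // profile name taken from the log
		if err != nil {
			fmt.Println("host: Nonexistent")
			return
		}
		if state != "running" {
			// Host is down: no point probing kubelet or the apiserver.
			fmt.Println("host: Stopped (skipping remaining checks)")
			return
		}
		fmt.Println("host: Running")
	}
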
--- PASS: TestMultiControlPlane/serial/StopCluster (35.85s)

TestMultiControlPlane/serial/RestartCluster (81.35s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-122174 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0520 10:49:32.161983 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
E0520 10:49:59.846348 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-122174 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m20.390308373s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (81.35s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.54s)

TestMultiControlPlane/serial/AddSecondaryNode (60.02s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-122174 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-122174 --control-plane -v=7 --alsologtostderr: (59.018531853s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-122174 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-122174 status -v=7 --alsologtostderr: (1.00303173s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (60.02s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)

TestJSONOutput/start/Command (74.93s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-372528 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-372528 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m14.929603275s)
--- PASS: TestJSONOutput/start/Command (74.93s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-372528 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-372528 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.91s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-372528 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-372528 --output=json --user=testUser: (5.913486564s)
--- PASS: TestJSONOutput/stop/Command (5.91s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-240848 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-240848 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (82.455573ms)

-- stdout --
	{"specversion":"1.0","id":"0a862b57-31f1-4fd0-b333-d5fed733aab4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-240848] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2f1d99df-a5f1-4c54-9ac4-fa44c45d3d76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18925"}}
	{"specversion":"1.0","id":"9a71b738-211d-452d-9b8f-5040fc18cf82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"32610a08-181a-4166-8fbe-8f7ff0357a1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18925-1463640/kubeconfig"}}
	{"specversion":"1.0","id":"92ea00fd-018e-4b6a-9fcd-e624f1dae6f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-1463640/.minikube"}}
	{"specversion":"1.0","id":"1493a5da-8bdb-4544-96ce-9019951a7736","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"c513112d-6466-42ce-8c1a-4450461ab0e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d19aeaa9-6b2e-470b-a56c-c8e2ce667da8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
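
Each stdout line above is a CloudEvents-style JSON event (type io.k8s.sigs.minikube.step, .info, or .error) emitted by --output=json. A small Go sketch that decodes such a stream and surfaces the error event, using only the field names visible in the log (not an official minikube client; pipe the JSON lines to stdin):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// minikubeEvent models just the fields visible in the log above;
	// data holds message, currentstep, exitcode, and similar strings.
	type minikubeEvent struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | thisprogram
		for sc.Scan() {
			var ev minikubeEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate non-JSON noise in the stream
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				// For the run above this prints the DRV_UNSUPPORTED_OS message.
				fmt.Printf("exit code %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}
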
helpers_test.go:175: Cleaning up "json-output-error-240848" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-240848
--- PASS: TestErrorJSONOutput (0.21s)

TestKicCustomNetwork/create_custom_network (42.77s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-321886 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-321886 --network=: (40.743953096s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-321886" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-321886
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-321886: (1.996923411s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.77s)

TestKicCustomNetwork/use_default_bridge_network (33.96s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-879375 --network=bridge
E0520 10:54:01.219604 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-879375 --network=bridge: (31.958004745s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-879375" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-879375
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-879375: (1.970386724s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.96s)

TestKicExistingNetwork (36.68s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-921491 --network=existing-network
E0520 10:54:32.162693 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-921491 --network=existing-network: (34.552290518s)
helpers_test.go:175: Cleaning up "existing-network-921491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-921491
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-921491: (1.989905351s)
--- PASS: TestKicExistingNetwork (36.68s)

TestKicCustomSubnet (34.34s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-814693 --subnet=192.168.60.0/24
E0520 10:55:24.265747 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-814693 --subnet=192.168.60.0/24: (32.258252025s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-814693 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-814693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-814693
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-814693: (2.068559746s)
--- PASS: TestKicCustomSubnet (34.34s)

TestKicStaticIP (36.17s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-523614 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-523614 --static-ip=192.168.200.200: (33.884430524s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-523614 ip
helpers_test.go:175: Cleaning up "static-ip-523614" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-523614
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-523614: (2.142385563s)
--- PASS: TestKicStaticIP (36.17s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (69.3s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-688534 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-688534 --driver=docker  --container-runtime=crio: (31.366733934s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-691365 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-691365 --driver=docker  --container-runtime=crio: (32.838898085s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-688534
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-691365
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-691365" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-691365
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-691365: (1.990656341s)
helpers_test.go:175: Cleaning up "first-688534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-688534
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-688534: (1.953556412s)
--- PASS: TestMinikubeProfile (69.30s)

TestMountStart/serial/StartWithMountFirst (9.41s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-980217 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-980217 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.4134729s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.41s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-980217 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (6.42s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-993329 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-993329 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.42178382s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.42s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-993329 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.59s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-980217 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-980217 --alsologtostderr -v=5: (1.592101474s)
--- PASS: TestMountStart/serial/DeleteFirst (1.59s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-993329 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-993329
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-993329: (1.204581889s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (8.23s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-993329
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-993329: (7.232399096s)
--- PASS: TestMountStart/serial/RestartStopped (8.23s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-993329 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (119.92s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-145607 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0520 10:59:01.219607 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
E0520 10:59:32.162053 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-145607 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m59.411088035s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (119.92s)

TestMultiNode/serial/DeployApp2Nodes (4.82s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-145607 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-145607 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-145607 -- rollout status deployment/busybox: (2.94983169s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-145607 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-145607 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-145607 -- exec busybox-fc5497c4f-7xsfn -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-145607 -- exec busybox-fc5497c4f-g55n2 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-145607 -- exec busybox-fc5497c4f-7xsfn -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-145607 -- exec busybox-fc5497c4f-g55n2 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-145607 -- exec busybox-fc5497c4f-7xsfn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-145607 -- exec busybox-fc5497c4f-g55n2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.82s)

TestMultiNode/serial/PingHostFrom2Pods (0.99s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-145607 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-145607 -- exec busybox-fc5497c4f-7xsfn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-145607 -- exec busybox-fc5497c4f-7xsfn -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-145607 -- exec busybox-fc5497c4f-g55n2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-145607 -- exec busybox-fc5497c4f-g55n2 -- sh -c "ping -c 1 192.168.67.1"
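
The exec pipelines above resolve host.minikube.internal from inside each busybox pod and then ping the returned gateway address (192.168.67.1 on this cluster). A program running in a pod can do the same lookup with a single resolver call; a sketch, not part of the test suite:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Resolves only inside the cluster, where minikube injects this host entry.
		addrs, err := net.LookupHost("host.minikube.internal")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println(addrs[0]) // 192.168.67.1 on the cluster in this log
	}
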
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)

TestMultiNode/serial/AddNode (46.75s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-145607 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-145607 -v 3 --alsologtostderr: (46.072271085s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.75s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-145607 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.38s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.38s)

TestMultiNode/serial/CopyFile (9.77s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 cp testdata/cp-test.txt multinode-145607:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 ssh -n multinode-145607 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 cp multinode-145607:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2702778597/001/cp-test_multinode-145607.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 ssh -n multinode-145607 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 cp multinode-145607:/home/docker/cp-test.txt multinode-145607-m02:/home/docker/cp-test_multinode-145607_multinode-145607-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 ssh -n multinode-145607 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 ssh -n multinode-145607-m02 "sudo cat /home/docker/cp-test_multinode-145607_multinode-145607-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 cp multinode-145607:/home/docker/cp-test.txt multinode-145607-m03:/home/docker/cp-test_multinode-145607_multinode-145607-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 ssh -n multinode-145607 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 ssh -n multinode-145607-m03 "sudo cat /home/docker/cp-test_multinode-145607_multinode-145607-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 cp testdata/cp-test.txt multinode-145607-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 ssh -n multinode-145607-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 cp multinode-145607-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2702778597/001/cp-test_multinode-145607-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 ssh -n multinode-145607-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 cp multinode-145607-m02:/home/docker/cp-test.txt multinode-145607:/home/docker/cp-test_multinode-145607-m02_multinode-145607.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 ssh -n multinode-145607-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 ssh -n multinode-145607 "sudo cat /home/docker/cp-test_multinode-145607-m02_multinode-145607.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 cp multinode-145607-m02:/home/docker/cp-test.txt multinode-145607-m03:/home/docker/cp-test_multinode-145607-m02_multinode-145607-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 ssh -n multinode-145607-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 ssh -n multinode-145607-m03 "sudo cat /home/docker/cp-test_multinode-145607-m02_multinode-145607-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 cp testdata/cp-test.txt multinode-145607-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 ssh -n multinode-145607-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 cp multinode-145607-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2702778597/001/cp-test_multinode-145607-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 ssh -n multinode-145607-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 cp multinode-145607-m03:/home/docker/cp-test.txt multinode-145607:/home/docker/cp-test_multinode-145607-m03_multinode-145607.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 ssh -n multinode-145607-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 ssh -n multinode-145607 "sudo cat /home/docker/cp-test_multinode-145607-m03_multinode-145607.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 cp multinode-145607-m03:/home/docker/cp-test.txt multinode-145607-m02:/home/docker/cp-test_multinode-145607-m03_multinode-145607-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 ssh -n multinode-145607-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 ssh -n multinode-145607-m02 "sudo cat /home/docker/cp-test_multinode-145607-m03_multinode-145607-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.77s)

TestMultiNode/serial/StopNode (2.21s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-145607 node stop m03: (1.213705231s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-145607 status: exit status 7 (502.473322ms)

-- stdout --
	multinode-145607
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-145607-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-145607-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-145607 status --alsologtostderr: exit status 7 (489.60447ms)

-- stdout --
	multinode-145607
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-145607-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-145607-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0520 11:00:53.922355 1576653 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:00:53.922585 1576653 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:00:53.922623 1576653 out.go:304] Setting ErrFile to fd 2...
	I0520 11:00:53.922643 1576653 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:00:53.922898 1576653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-1463640/.minikube/bin
	I0520 11:00:53.923102 1576653 out.go:298] Setting JSON to false
	I0520 11:00:53.923167 1576653 mustload.go:65] Loading cluster: multinode-145607
	I0520 11:00:53.923238 1576653 notify.go:220] Checking for updates...
	I0520 11:00:53.923601 1576653 config.go:182] Loaded profile config "multinode-145607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:00:53.923635 1576653 status.go:255] checking status of multinode-145607 ...
	I0520 11:00:53.924549 1576653 cli_runner.go:164] Run: docker container inspect multinode-145607 --format={{.State.Status}}
	I0520 11:00:53.946908 1576653 status.go:330] multinode-145607 host status = "Running" (err=<nil>)
	I0520 11:00:53.946938 1576653 host.go:66] Checking if "multinode-145607" exists ...
	I0520 11:00:53.947249 1576653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-145607
	I0520 11:00:53.964316 1576653 host.go:66] Checking if "multinode-145607" exists ...
	I0520 11:00:53.964697 1576653 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 11:00:53.964753 1576653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-145607
	I0520 11:00:53.985565 1576653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40632 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/multinode-145607/id_rsa Username:docker}
	I0520 11:00:54.084863 1576653 ssh_runner.go:195] Run: systemctl --version
	I0520 11:00:54.089334 1576653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:00:54.101841 1576653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 11:00:54.155474 1576653 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-05-20 11:00:54.145477663 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 11:00:54.156082 1576653 kubeconfig.go:125] found "multinode-145607" server: "https://192.168.67.2:8443"
	I0520 11:00:54.156109 1576653 api_server.go:166] Checking apiserver status ...
	I0520 11:00:54.156153 1576653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:00:54.167794 1576653 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1362/cgroup
	I0520 11:00:54.177158 1576653 api_server.go:182] apiserver freezer: "3:freezer:/docker/9b5085f89c21649630a9cb864d361ac2bb8947790eb89b7a5b6b8778ea9dc9b1/crio/crio-805416a78a445a3c8b09c5de1dc72dcd494e3ef2120b56e767c7863e703cee5f"
	I0520 11:00:54.177245 1576653 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9b5085f89c21649630a9cb864d361ac2bb8947790eb89b7a5b6b8778ea9dc9b1/crio/crio-805416a78a445a3c8b09c5de1dc72dcd494e3ef2120b56e767c7863e703cee5f/freezer.state
	I0520 11:00:54.185965 1576653 api_server.go:204] freezer state: "THAWED"
	I0520 11:00:54.185994 1576653 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0520 11:00:54.193530 1576653 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0520 11:00:54.193565 1576653 status.go:422] multinode-145607 apiserver status = Running (err=<nil>)
	I0520 11:00:54.193588 1576653 status.go:257] multinode-145607 status: &{Name:multinode-145607 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 11:00:54.193608 1576653 status.go:255] checking status of multinode-145607-m02 ...
	I0520 11:00:54.193954 1576653 cli_runner.go:164] Run: docker container inspect multinode-145607-m02 --format={{.State.Status}}
	I0520 11:00:54.210412 1576653 status.go:330] multinode-145607-m02 host status = "Running" (err=<nil>)
	I0520 11:00:54.210445 1576653 host.go:66] Checking if "multinode-145607-m02" exists ...
	I0520 11:00:54.210738 1576653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-145607-m02
	I0520 11:00:54.226173 1576653 host.go:66] Checking if "multinode-145607-m02" exists ...
	I0520 11:00:54.226492 1576653 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 11:00:54.226544 1576653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-145607-m02
	I0520 11:00:54.242874 1576653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40637 SSHKeyPath:/home/jenkins/minikube-integration/18925-1463640/.minikube/machines/multinode-145607-m02/id_rsa Username:docker}
	I0520 11:00:54.331095 1576653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:00:54.342650 1576653 status.go:257] multinode-145607-m02 status: &{Name:multinode-145607-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0520 11:00:54.342687 1576653 status.go:255] checking status of multinode-145607-m03 ...
	I0520 11:00:54.342990 1576653 cli_runner.go:164] Run: docker container inspect multinode-145607-m03 --format={{.State.Status}}
	I0520 11:00:54.361128 1576653 status.go:330] multinode-145607-m03 host status = "Stopped" (err=<nil>)
	I0520 11:00:54.361152 1576653 status.go:343] host is not running, skipping remaining checks
	I0520 11:00:54.361160 1576653 status.go:257] multinode-145607-m03 status: &{Name:multinode-145607-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.21s)
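
For reference, the apiserver health probe traced in the stderr block above can be reproduced by hand. This is a minimal sketch assuming a cgroup v1 freezer hierarchy (as on this host); <pid> and <cgroup-path> are placeholders for the values the first two commands return:

	# inside the node, e.g. via `minikube ssh`
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'                  # find the apiserver PID
	sudo egrep '^[0-9]+:freezer:' /proc/<pid>/cgroup              # locate its freezer cgroup
	sudo cat /sys/fs/cgroup/freezer/<cgroup-path>/freezer.state   # "THAWED" => not paused
	curl -k https://192.168.67.2:8443/healthz                     # expect "ok"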

TestMultiNode/serial/StartAfterStop (10.08s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 node start m03 -v=7 --alsologtostderr
E0520 11:00:55.206558 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-145607 node start m03 -v=7 --alsologtostderr: (9.357588376s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.08s)
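
Distilled from the run above, restarting a stopped node is just the following (commands as used by the test, with the installed minikube standing in for the test binary):

	minikube -p multinode-145607 node start m03 -v=7 --alsologtostderr
	minikube -p multinode-145607 status -v=7 --alsologtostderr   # m03 reports Running again
	kubectl get nodes                                            # node rejoins the cluster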

TestMultiNode/serial/RestartKeepsNodes (83.5s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-145607
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-145607
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-145607: (24.799598191s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-145607 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-145607 --wait=true -v=8 --alsologtostderr: (58.586188347s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-145607
--- PASS: TestMultiNode/serial/RestartKeepsNodes (83.50s)
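
What this test asserts, in plain commands: the node set recorded before a full stop must match the node set after a --wait=true restart. A sketch:

	minikube node list -p multinode-145607        # record the node set
	minikube stop -p multinode-145607
	minikube start -p multinode-145607 --wait=true
	minikube node list -p multinode-145607        # expect the same nodes as before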

TestMultiNode/serial/DeleteNode (5.31s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-145607 node delete m03: (4.652910327s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.31s)
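
The go-template in the final check prints the Ready-condition status of every remaining node, one per line. Run on its own (output values illustrative):

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	# True
	# True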

TestMultiNode/serial/StopMultiNode (23.91s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-145607 stop: (23.739784887s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-145607 status: exit status 7 (89.394288ms)

                                                
                                                
-- stdout --
	multinode-145607
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-145607-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-145607 status --alsologtostderr: exit status 7 (83.074705ms)

                                                
                                                
-- stdout --
	multinode-145607
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-145607-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 11:02:57.126216 1583731 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:02:57.126365 1583731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:02:57.126378 1583731 out.go:304] Setting ErrFile to fd 2...
	I0520 11:02:57.126385 1583731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:02:57.126679 1583731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-1463640/.minikube/bin
	I0520 11:02:57.126891 1583731 out.go:298] Setting JSON to false
	I0520 11:02:57.126930 1583731 mustload.go:65] Loading cluster: multinode-145607
	I0520 11:02:57.127046 1583731 notify.go:220] Checking for updates...
	I0520 11:02:57.127378 1583731 config.go:182] Loaded profile config "multinode-145607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:02:57.127399 1583731 status.go:255] checking status of multinode-145607 ...
	I0520 11:02:57.127926 1583731 cli_runner.go:164] Run: docker container inspect multinode-145607 --format={{.State.Status}}
	I0520 11:02:57.145762 1583731 status.go:330] multinode-145607 host status = "Stopped" (err=<nil>)
	I0520 11:02:57.145787 1583731 status.go:343] host is not running, skipping remaining checks
	I0520 11:02:57.145795 1583731 status.go:257] multinode-145607 status: &{Name:multinode-145607 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 11:02:57.145818 1583731 status.go:255] checking status of multinode-145607-m02 ...
	I0520 11:02:57.146130 1583731 cli_runner.go:164] Run: docker container inspect multinode-145607-m02 --format={{.State.Status}}
	I0520 11:02:57.163600 1583731 status.go:330] multinode-145607-m02 host status = "Stopped" (err=<nil>)
	I0520 11:02:57.163629 1583731 status.go:343] host is not running, skipping remaining checks
	I0520 11:02:57.163637 1583731 status.go:257] multinode-145607-m02 status: &{Name:multinode-145607-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.91s)
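
Exit status 7 from `minikube status` is expected here rather than a failure: as the stdout blocks above show, it is how status encodes a stopped host. A quick check (sketch):

	minikube -p multinode-145607 stop
	minikube -p multinode-145607 status
	echo $?   # 7 while the host is stopped; 0 only when all components run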

TestMultiNode/serial/RestartMultiNode (59.18s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-145607 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-145607 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (58.475318962s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-145607 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (59.18s)

TestMultiNode/serial/ValidateNameConflict (32.37s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-145607
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-145607-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-145607-m02 --driver=docker  --container-runtime=crio: exit status 14 (87.82418ms)

                                                
                                                
-- stdout --
	* [multinode-145607-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18925-1463640/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-1463640/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-145607-m02' is duplicated with machine name 'multinode-145607-m02' in profile 'multinode-145607'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-145607-m03 --driver=docker  --container-runtime=crio
E0520 11:04:01.219271 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-145607-m03 --driver=docker  --container-runtime=crio: (29.970560289s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-145607
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-145607: exit status 80 (323.049301ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-145607 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-145607-m03 already exists in multinode-145607-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_6.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-145607-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-145607-m03: (1.936135464s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.37s)

TestPreload (112.19s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-327127 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-327127 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m19.926042432s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-327127 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-327127 image pull gcr.io/k8s-minikube/busybox: (1.842294927s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-327127
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-327127: (6.015942817s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-327127 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-327127 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (21.821892035s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-327127 image list
helpers_test.go:175: Cleaning up "test-preload-327127" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-327127
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-327127: (2.312821507s)
--- PASS: TestPreload (112.19s)
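
The preload check above boils down to: an image pulled before a stop must survive a restart, even for a cluster created with --preload=false. In plain commands (taken from the run):

	minikube start -p test-preload-327127 --memory=2200 --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
	minikube -p test-preload-327127 image pull gcr.io/k8s-minikube/busybox
	minikube stop -p test-preload-327127
	minikube start -p test-preload-327127 --memory=2200 --wait=true --driver=docker --container-runtime=crio
	minikube -p test-preload-327127 image list   # busybox should still be listed
	minikube delete -p test-preload-327127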

TestScheduledStopUnix (106.71s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-896134 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-896134 --memory=2048 --driver=docker  --container-runtime=crio: (30.303253746s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-896134 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-896134 -n scheduled-stop-896134
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-896134 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-896134 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-896134 -n scheduled-stop-896134
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-896134
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-896134 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-896134
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-896134: exit status 7 (71.912497ms)

                                                
                                                
-- stdout --
	scheduled-stop-896134
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-896134 -n scheduled-stop-896134
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-896134 -n scheduled-stop-896134: exit status 7 (67.181734ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-896134" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-896134
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-896134: (4.939013268s)
--- PASS: TestScheduledStopUnix (106.71s)
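
The scheduled-stop workflow exercised above, reduced to its commands (all flags as used by the test):

	minikube stop -p scheduled-stop-896134 --schedule 5m        # arm a stop 5 minutes out
	minikube stop -p scheduled-stop-896134 --cancel-scheduled   # disarm it
	minikube stop -p scheduled-stop-896134 --schedule 15s       # re-arm; fires ~15s later
	minikube status -p scheduled-stop-896134 --format={{.Host}} # "Stopped", exit status 7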

TestInsufficientStorage (10.44s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-033644 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-033644 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.946029852s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b10d5d29-63a6-4104-8cf3-9aa44294bca3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-033644] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"92cf2113-523b-4046-bafb-795b8a5e7e0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18925"}}
	{"specversion":"1.0","id":"e93365d1-8efe-4fbc-a5bd-ecb760fb552c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"95f5c114-cd40-4fb7-b9b0-68dd236af5c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18925-1463640/kubeconfig"}}
	{"specversion":"1.0","id":"937b6354-fbbb-4353-8adc-95f8d971d49b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-1463640/.minikube"}}
	{"specversion":"1.0","id":"476bc260-d67f-45c0-b865-1b46c282c140","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"36e33228-5d66-4ce3-81d1-a06c947d3106","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"77d62e71-b921-4f6d-9016-481e8199587e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"751b7e9f-2183-4bea-85fd-b393c866ee66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"5ece1fda-7ec7-40b8-b71b-7dda07336dd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"00d93529-5106-4547-a154-6a4c9d2ebfb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0d5f7280-5416-4bb0-8b5f-d1ea34f2ae68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-033644\" primary control-plane node in \"insufficient-storage-033644\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6a751049-f2a9-4490-9163-aabf24b3390a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1715707529-18887 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"c4968a5f-a356-48d1-918b-b9930e668e24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"bb7d9a08-5e15-4bc8-97e5-ebf9f15cae6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-033644 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-033644 --output=json --layout=cluster: exit status 7 (287.04449ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-033644","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-033644","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 11:08:19.936434 1600485 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-033644" does not appear in /home/jenkins/minikube-integration/18925-1463640/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-033644 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-033644 --output=json --layout=cluster: exit status 7 (302.841947ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-033644","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-033644","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 11:08:20.242821 1600539 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-033644" does not appear in /home/jenkins/minikube-integration/18925-1463640/kubeconfig
	E0520 11:08:20.253448 1600539 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/insufficient-storage-033644/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-033644" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-033644
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-033644: (1.908126296s)
--- PASS: TestInsufficientStorage (10.44s)
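
The two MINIKUBE_TEST_* variables in the JSON events above are what make this test deterministic: they override the detected and available Docker storage so the capacity check always trips. A sketch of reproducing the failure (values as shown in the events):

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  minikube start -p insufficient-storage-033644 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio
	echo $?   # 26 (RSRC_DOCKER_STORAGE); per the advice above, --force skips the check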

TestRunningBinaryUpgrade (76.11s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
E0520 11:12:04.266180 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.569946307 start -p running-upgrade-592513 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.569946307 start -p running-upgrade-592513 --memory=2200 --vm-driver=docker  --container-runtime=crio: (37.606869503s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-592513 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-592513 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.248981927s)
helpers_test.go:175: Cleaning up "running-upgrade-592513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-592513
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-592513: (3.16278539s)
--- PASS: TestRunningBinaryUpgrade (76.11s)

TestKubernetesUpgrade (397.62s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-030212 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0520 11:09:32.162136 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-030212 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m9.802294543s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-030212
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-030212: (1.354623862s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-030212 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-030212 status --format={{.Host}}: exit status 7 (82.916946ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-030212 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-030212 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m44.41101248s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-030212 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-030212 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-030212 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (107.342215ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-030212] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18925-1463640/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-1463640/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-030212
	    minikube start -p kubernetes-upgrade-030212 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0302122 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.1, by running:
	    
	    minikube start -p kubernetes-upgrade-030212 --kubernetes-version=v1.30.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-030212 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-030212 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.374734413s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-030212" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-030212
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-030212: (2.365662564s)
--- PASS: TestKubernetesUpgrade (397.62s)
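
Stripped of the harness, the supported upgrade path exercised here is: start on the old version, stop, start on the new version. An in-place downgrade is refused with exit status 106, and the recovery options are exactly the commands printed in the stderr block above.

	minikube start -p kubernetes-upgrade-030212 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
	minikube stop -p kubernetes-upgrade-030212
	minikube start -p kubernetes-upgrade-030212 --memory=2200 --kubernetes-version=v1.30.1 --driver=docker --container-runtime=crio
	kubectl --context kubernetes-upgrade-030212 version --output=json   # server reports v1.30.1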

TestMissingContainerUpgrade (143s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2779587085 start -p missing-upgrade-955029 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2779587085 start -p missing-upgrade-955029 --memory=2200 --driver=docker  --container-runtime=crio: (1m11.101927704s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-955029
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-955029: (10.436980303s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-955029
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-955029 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-955029 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (58.092992381s)
helpers_test.go:175: Cleaning up "missing-upgrade-955029" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-955029
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-955029: (2.153166265s)
--- PASS: TestMissingContainerUpgrade (143.00s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-388734 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-388734 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (78.853311ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-388734] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18925-1463640/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-1463640/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (39.24s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-388734 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-388734 --driver=docker  --container-runtime=crio: (38.804218126s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-388734 status -o json
E0520 11:09:01.218880 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.24s)

TestNoKubernetes/serial/StartWithStopK8s (7.46s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-388734 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-388734 --no-kubernetes --driver=docker  --container-runtime=crio: (5.132317674s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-388734 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-388734 status -o json: exit status 2 (345.769952ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-388734","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-388734
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-388734: (1.985365022s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.46s)

TestNoKubernetes/serial/Start (9.37s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-388734 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-388734 --no-kubernetes --driver=docker  --container-runtime=crio: (9.371812612s)
--- PASS: TestNoKubernetes/serial/Start (9.37s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-388734 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-388734 "sudo systemctl is-active --quiet service kubelet": exit status 1 (256.496907ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
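
The verification is a plain systemd probe over SSH; `systemctl is-active` exits 3 for an inactive unit, which the test treats as "Kubernetes is not running":

	minikube ssh -p NoKubernetes-388734 "sudo systemctl is-active --quiet service kubelet"
	echo $?   # non-zero (3 = inactive) while kubelet is not running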

TestNoKubernetes/serial/ProfileList (0.88s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.88s)

TestNoKubernetes/serial/Stop (1.26s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-388734
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-388734: (1.258639624s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

TestNoKubernetes/serial/StartNoArgs (7.84s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-388734 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-388734 --driver=docker  --container-runtime=crio: (7.839815598s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.84s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-388734 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-388734 "sudo systemctl is-active --quiet service kubelet": exit status 1 (335.493071ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

TestStoppedBinaryUpgrade/Setup (1.06s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.06s)

TestStoppedBinaryUpgrade/Upgrade (72.96s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3378815055 start -p stopped-upgrade-653975 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3378815055 start -p stopped-upgrade-653975 --memory=2200 --vm-driver=docker  --container-runtime=crio: (40.256392366s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3378815055 -p stopped-upgrade-653975 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3378815055 -p stopped-upgrade-653975 stop: (2.66112978s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-653975 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-653975 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.041040767s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (72.96s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.42s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-653975
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-653975: (2.419473226s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.42s)

TestPause/serial/Start (77.24s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-161290 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0520 11:14:01.218687 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
E0520 11:14:32.162834 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-161290 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m17.238129809s)
--- PASS: TestPause/serial/Start (77.24s)

TestPause/serial/SecondStartNoReconfiguration (36.06s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-161290 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-161290 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.029387272s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (36.06s)

TestPause/serial/Pause (0.9s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-161290 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.90s)

TestPause/serial/VerifyStatus (0.44s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-161290 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-161290 --output=json --layout=cluster: exit status 2 (438.340082ms)

                                                
                                                
-- stdout --
	{"Name":"pause-161290","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-161290","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.44s)
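
The exit status 2 here is expected: with --layout=cluster, status reports HTTP-like codes, and a paused cluster is not "OK". The codes seen in this report are 200 OK, 405 Stopped, 418 Paused, 500 Error, and 507 InsufficientStorage; for example:

	minikube status -p pause-161290 --output=json --layout=cluster
	# cluster StatusCode 418 ("Paused"); kubelet 405 ("Stopped"); kubeconfig 200 ("OK")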

TestPause/serial/Unpause (1.22s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-161290 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-161290 --alsologtostderr -v=5: (1.22100871s)
--- PASS: TestPause/serial/Unpause (1.22s)

TestPause/serial/PauseAgain (1.55s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-161290 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-161290 --alsologtostderr -v=5: (1.553087095s)
--- PASS: TestPause/serial/PauseAgain (1.55s)

TestPause/serial/DeletePaused (3.12s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-161290 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-161290 --alsologtostderr -v=5: (3.119685755s)
--- PASS: TestPause/serial/DeletePaused (3.12s)

TestPause/serial/VerifyDeletedResources (2.72s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (2.630480219s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-161290
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-161290: exit status 1 (19.053303ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-161290: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (2.72s)
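
The cleanup assertions reduce to checking that Docker no longer knows about the profile. A sketch of the same checks by hand (the greps are illustrative; the test inspects the command output directly):

	minikube delete -p pause-161290 --alsologtostderr -v=5
	docker ps -a | grep pause-161290        # no container should match
	docker volume inspect pause-161290      # exit 1: "no such volume"
	docker network ls | grep pause-161290   # no network should remain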

TestNetworkPlugins/group/false (4.79s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-214342 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-214342 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (241.364901ms)

                                                
                                                
-- stdout --
	* [false-214342] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18925-1463640/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-1463640/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 11:16:08.713906 1640305 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:16:08.714224 1640305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:16:08.714260 1640305 out.go:304] Setting ErrFile to fd 2...
	I0520 11:16:08.714281 1640305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:16:08.714551 1640305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-1463640/.minikube/bin
	I0520 11:16:08.715086 1640305 out.go:298] Setting JSON to false
	I0520 11:16:08.716244 1640305 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":154716,"bootTime":1716049053,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0520 11:16:08.716344 1640305 start.go:139] virtualization:  
	I0520 11:16:08.720263 1640305 out.go:177] * [false-214342] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0520 11:16:08.723047 1640305 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 11:16:08.723118 1640305 notify.go:220] Checking for updates...
	I0520 11:16:08.725878 1640305 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 11:16:08.728425 1640305 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-1463640/kubeconfig
	I0520 11:16:08.732333 1640305 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-1463640/.minikube
	I0520 11:16:08.735004 1640305 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0520 11:16:08.737440 1640305 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 11:16:08.740822 1640305 config.go:182] Loaded profile config "force-systemd-env-085097": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:16:08.740932 1640305 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 11:16:08.763770 1640305 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0520 11:16:08.763911 1640305 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 11:16:08.855725 1640305 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-20 11:16:08.838061593 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 11:16:08.855842 1640305 docker.go:295] overlay module found
	I0520 11:16:08.858776 1640305 out.go:177] * Using the docker driver based on user configuration
	I0520 11:16:08.860532 1640305 start.go:297] selected driver: docker
	I0520 11:16:08.860565 1640305 start.go:901] validating driver "docker" against <nil>
	I0520 11:16:08.860580 1640305 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 11:16:08.867964 1640305 out.go:177] 
	W0520 11:16:08.870294 1640305 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0520 11:16:08.872538 1640305 out.go:177] 

** /stderr **
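
The MK_USAGE exit above is minikube's driver validation rejecting a start that disables CNI while the container runtime is crio. The full start command is not shown in this excerpt, so the reproduction below is an assumption based on the test name (TestNetworkPlugins/group/false exercises --cni=false); the cni-demo profile in the second command is likewise hypothetical:

# expected to fail validation, as this test asserts: crio requires a CNI
out/minikube-linux-arm64 start -p false-214342 --driver=docker --container-runtime=crio --cni=false
# passes validation by selecting a CNI explicitly (bridge is one of minikube's built-in options)
out/minikube-linux-arm64 start -p cni-demo --driver=docker --container-runtime=crio --cni=bridge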
net_test.go:88: 
----------------------- debugLogs start: false-214342 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-214342

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-214342

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-214342

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-214342

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-214342

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-214342

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-214342

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-214342

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-214342

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-214342

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-214342

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-214342" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-214342" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-214342" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-214342" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-214342" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-214342" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-214342" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-214342" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-214342" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-214342" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-214342" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
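
The kubeconfig dump above is empty (clusters, contexts, and users are all null), which is why every kubectl probe in this debug dump fails with "context was not found": the false-214342 start aborted before a context was ever written. A quick way to confirm against the same KUBECONFIG, as a sketch:

kubectl config get-contexts     # prints only the column headers when no contexts exist
kubectl config current-context  # fails with "error: current-context is not set"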

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-214342

>>> host: docker daemon status:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

>>> host: docker daemon config:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

>>> host: /etc/docker/daemon.json:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

>>> host: docker system info:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

>>> host: cri-docker daemon status:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

>>> host: cri-docker daemon config:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

>>> host: cri-dockerd version:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

>>> host: containerd daemon status:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

>>> host: containerd daemon config:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

>>> host: /etc/containerd/config.toml:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

>>> host: containerd config dump:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

>>> host: crio daemon status:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

>>> host: crio daemon config:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

>>> host: /etc/crio:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

>>> host: crio config:
* Profile "false-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-214342"

----------------------- debugLogs end: false-214342 [took: 4.348573971s] --------------------------------
helpers_test.go:175: Cleaning up "false-214342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-214342
--- PASS: TestNetworkPlugins/group/false (4.79s)

TestStartStop/group/old-k8s-version/serial/FirstStart (184.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-776336 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0520 11:17:35.207586 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
E0520 11:19:01.219351 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
E0520 11:19:32.162068 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-776336 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (3m4.354022115s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (184.35s)

TestStartStop/group/no-preload/serial/FirstStart (71.37s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-027096 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-027096 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1: (1m11.370690039s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (71.37s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-776336 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4606619c-b796-4385-a484-a73de0587ef2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4606619c-b796-4385-a484-a73de0587ef2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.008241942s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-776336 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.85s)
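
The DeployApp step only needs a pod labelled integration-test=busybox that reaches Running before the exec probe. testdata/busybox.yaml is not reproduced in this report; the following is a minimal sketch of an equivalent manifest (the image name is grounded in the VerifyKubernetesImages output later in this run; the rest, including the sleep command, is an assumption), ending with the same ulimit check:

# hypothetical stand-in for testdata/busybox.yaml
kubectl --context old-k8s-version-776336 create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]
EOF
kubectl --context old-k8s-version-776336 exec busybox -- /bin/sh -c "ulimit -n"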

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-776336 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-776336 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.647257118s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-776336 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.08s)

TestStartStop/group/old-k8s-version/serial/Stop (12.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-776336 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-776336 --alsologtostderr -v=3: (12.340208465s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.34s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-776336 -n old-k8s-version-776336
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-776336 -n old-k8s-version-776336: exit status 7 (121.369408ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-776336 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)
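
The "exit status 7 (may be ok)" note above is the test tolerating minikube's bitmask status codes: minikube documents the status exit code as one bit per unhealthy component (1 for the host, 2 for the cluster/kubelet, 4 for Kubernetes/apiserver), so 7 is exactly what a cleanly stopped profile reports. A hedged check against this profile:

out/minikube-linux-arm64 status -p old-k8s-version-776336; echo "status exit: $?"
# expected on a stopped profile: Host/Kubelet/APIServer all Stopped, exit 7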

TestStartStop/group/no-preload/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-027096 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2590367a-823b-4ab0-94a1-07014ed0d721] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2590367a-823b-4ab0-94a1-07014ed0d721] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003237971s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-027096 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.35s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-027096 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-027096 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.027391936s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-027096 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/no-preload/serial/Stop (12.27s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-027096 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-027096 --alsologtostderr -v=3: (12.270124867s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.27s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-027096 -n no-preload-027096
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-027096 -n no-preload-027096: exit status 7 (103.401643ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-027096 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (289.19s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-027096 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1
E0520 11:24:01.219606 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
E0520 11:24:32.162214 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-027096 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1: (4m48.838829349s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-027096 -n no-preload-027096
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (289.19s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-54b99" [19e81e46-fd27-4db1-b520-97f389c9dd42] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004438729s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-54b99" [19e81e46-fd27-4db1-b520-97f389c9dd42] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00406309s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-027096 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-027096 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/no-preload/serial/Pause (3.23s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-027096 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-027096 --alsologtostderr -v=1: (1.049883539s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-027096 -n no-preload-027096
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-027096 -n no-preload-027096: exit status 2 (352.959132ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-027096 -n no-preload-027096
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-027096 -n no-preload-027096: exit status 2 (298.229151ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-027096 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-027096 -n no-preload-027096
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-027096 -n no-preload-027096
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.23s)
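
The Pause sequence relies on the same bitmask exit codes: while the profile is paused, {{.APIServer}} renders Paused, {{.Kubelet}} renders Stopped, and status exits 2 rather than 0, which the test accepts as "may be ok". The same cycle by hand, as a sketch:

out/minikube-linux-arm64 pause -p no-preload-027096
out/minikube-linux-arm64 status --format='{{.APIServer}}' -p no-preload-027096    # Paused, exit 2
out/minikube-linux-arm64 unpause -p no-preload-027096
out/minikube-linux-arm64 status --format='{{.APIServer}}' -p no-preload-027096    # Running, exit 0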

TestStartStop/group/embed-certs/serial/FirstStart (78.31s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-746238 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-746238 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1: (1m18.313450954s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (78.31s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-cvhs4" [b285a064-dc50-4b4d-a785-1b050cfb0414] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00341175s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-cvhs4" [b285a064-dc50-4b4d-a785-1b050cfb0414] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004527576s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-776336 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-776336 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-776336 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-776336 -n old-k8s-version-776336
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-776336 -n old-k8s-version-776336: exit status 2 (323.896584ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-776336 -n old-k8s-version-776336
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-776336 -n old-k8s-version-776336: exit status 2 (343.120955ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-776336 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-776336 -n old-k8s-version-776336
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-776336 -n old-k8s-version-776336
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.00s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-856451 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-856451 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1: (1m21.956625377s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.96s)

TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-746238 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e1906d81-e4fb-46a8-b416-799328dccbcb] Pending
helpers_test.go:344: "busybox" [e1906d81-e4fb-46a8-b416-799328dccbcb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e1906d81-e4fb-46a8-b416-799328dccbcb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003249781s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-746238 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-746238 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-746238 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/embed-certs/serial/Stop (12.01s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-746238 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-746238 --alsologtostderr -v=3: (12.012428895s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-746238 -n embed-certs-746238
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-746238 -n embed-certs-746238: exit status 7 (86.06438ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-746238 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (266.64s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-746238 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1
E0520 11:28:44.267230 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-746238 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1: (4m26.28895412s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-746238 -n embed-certs-746238
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.64s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-856451 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [09036316-bac7-40b7-8cc8-78f16fb841bd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0520 11:29:01.219583 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
helpers_test.go:344: "busybox" [09036316-bac7-40b7-8cc8-78f16fb841bd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004029915s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-856451 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.36s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-856451 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-856451 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.03908174s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-856451 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.76s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-856451 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-856451 --alsologtostderr -v=3: (12.757875021s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.76s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-856451 -n default-k8s-diff-port-856451
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-856451 -n default-k8s-diff-port-856451: exit status 7 (68.379627ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-856451 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (303.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-856451 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1
E0520 11:29:32.162527 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
E0520 11:30:35.357001 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/client.crt: no such file or directory
E0520 11:30:35.362335 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/client.crt: no such file or directory
E0520 11:30:35.372598 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/client.crt: no such file or directory
E0520 11:30:35.392858 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/client.crt: no such file or directory
E0520 11:30:35.433083 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/client.crt: no such file or directory
E0520 11:30:35.513323 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/client.crt: no such file or directory
E0520 11:30:35.673745 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/client.crt: no such file or directory
E0520 11:30:35.994306 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/client.crt: no such file or directory
E0520 11:30:36.634673 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/client.crt: no such file or directory
E0520 11:30:37.914897 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/client.crt: no such file or directory
E0520 11:30:40.475541 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/client.crt: no such file or directory
E0520 11:30:45.596580 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/client.crt: no such file or directory
E0520 11:30:55.837455 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/client.crt: no such file or directory
E0520 11:31:16.317707 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/client.crt: no such file or directory
E0520 11:31:31.694276 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/no-preload-027096/client.crt: no such file or directory
E0520 11:31:31.699637 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/no-preload-027096/client.crt: no such file or directory
E0520 11:31:31.709958 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/no-preload-027096/client.crt: no such file or directory
E0520 11:31:31.730225 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/no-preload-027096/client.crt: no such file or directory
E0520 11:31:31.770605 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/no-preload-027096/client.crt: no such file or directory
E0520 11:31:31.850916 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/no-preload-027096/client.crt: no such file or directory
E0520 11:31:32.011362 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/no-preload-027096/client.crt: no such file or directory
E0520 11:31:32.331843 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/no-preload-027096/client.crt: no such file or directory
E0520 11:31:32.972724 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/no-preload-027096/client.crt: no such file or directory
E0520 11:31:34.252949 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/no-preload-027096/client.crt: no such file or directory
E0520 11:31:36.813091 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/no-preload-027096/client.crt: no such file or directory
E0520 11:31:41.934217 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/no-preload-027096/client.crt: no such file or directory
E0520 11:31:52.175062 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/no-preload-027096/client.crt: no such file or directory
E0520 11:31:57.277980 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/client.crt: no such file or directory
E0520 11:32:12.655252 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/no-preload-027096/client.crt: no such file or directory
E0520 11:32:53.615472 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/no-preload-027096/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-856451 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1: (5m3.126618258s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-856451 -n default-k8s-diff-port-856451
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (303.62s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-k559l" [396868e3-8a64-4578-bb5a-cc9a49356f1e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004138681s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.14s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-k559l" [396868e3-8a64-4578-bb5a-cc9a49356f1e] Running
E0520 11:33:19.199707 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005466373s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-746238 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.14s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-746238 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (3.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-746238 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-746238 -n embed-certs-746238
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-746238 -n embed-certs-746238: exit status 2 (331.896278ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-746238 -n embed-certs-746238
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-746238 -n embed-certs-746238: exit status 2 (315.843101ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-746238 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-746238 -n embed-certs-746238
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-746238 -n embed-certs-746238
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.10s)

TestStartStop/group/newest-cni/serial/FirstStart (43.8s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-364873 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1
E0520 11:34:01.219018 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-364873 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1: (43.801844317s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.80s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-364873 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

TestStartStop/group/newest-cni/serial/Stop (1.26s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-364873 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-364873 --alsologtostderr -v=3: (1.264040488s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.26s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-364873 -n newest-cni-364873
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-364873 -n newest-cni-364873: exit status 7 (91.87326ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-364873 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (17.94s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-364873 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1
E0520 11:34:15.208023 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
E0520 11:34:15.536594 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/no-preload-027096/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-364873 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1: (17.573415946s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-364873 -n newest-cni-364873
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.94s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-bpgxn" [8f10db10-adf0-49c3-89c7-46de974a2131] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003530184s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-364873 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-bpgxn" [8f10db10-adf0-49c3-89c7-46de974a2131] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005060241s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-856451 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/newest-cni/serial/Pause (2.81s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-364873 --alsologtostderr -v=1
E0520 11:34:32.162602 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-364873 -n newest-cni-364873
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-364873 -n newest-cni-364873: exit status 2 (297.361599ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-364873 -n newest-cni-364873
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-364873 -n newest-cni-364873: exit status 2 (311.266803ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-364873 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-364873 -n newest-cni-364873
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-364873 -n newest-cni-364873
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.81s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-856451 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

TestNetworkPlugins/group/auto/Start (59.49s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-214342 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-214342 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (59.484088888s)
--- PASS: TestNetworkPlugins/group/auto/Start (59.49s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-856451 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-856451 --alsologtostderr -v=1: (1.230247577s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-856451 -n default-k8s-diff-port-856451
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-856451 -n default-k8s-diff-port-856451: exit status 2 (376.344697ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-856451 -n default-k8s-diff-port-856451
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-856451 -n default-k8s-diff-port-856451: exit status 2 (374.413384ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-856451 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-856451 -n default-k8s-diff-port-856451
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-856451 -n default-k8s-diff-port-856451
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.96s)
E0520 11:40:21.563549 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/default-k8s-diff-port-856451/client.crt: no such file or directory
E0520 11:40:35.357213 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/client.crt: no such file or directory
E0520 11:40:36.884998 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/auto-214342/client.crt: no such file or directory
E0520 11:40:36.890323 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/auto-214342/client.crt: no such file or directory
E0520 11:40:36.900595 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/auto-214342/client.crt: no such file or directory
E0520 11:40:36.920871 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/auto-214342/client.crt: no such file or directory
E0520 11:40:36.961111 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/auto-214342/client.crt: no such file or directory
E0520 11:40:37.041604 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/auto-214342/client.crt: no such file or directory
E0520 11:40:37.202047 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/auto-214342/client.crt: no such file or directory
E0520 11:40:37.522635 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/auto-214342/client.crt: no such file or directory
E0520 11:40:38.163288 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/auto-214342/client.crt: no such file or directory
E0520 11:40:39.443449 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/auto-214342/client.crt: no such file or directory
E0520 11:40:42.007778 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/auto-214342/client.crt: no such file or directory
E0520 11:40:47.128826 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/auto-214342/client.crt: no such file or directory
E0520 11:40:57.369806 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/auto-214342/client.crt: no such file or directory
E0520 11:41:07.791179 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/kindnet-214342/client.crt: no such file or directory
E0520 11:41:07.796491 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/kindnet-214342/client.crt: no such file or directory
E0520 11:41:07.806666 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/kindnet-214342/client.crt: no such file or directory
E0520 11:41:07.827331 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/kindnet-214342/client.crt: no such file or directory
E0520 11:41:07.867630 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/kindnet-214342/client.crt: no such file or directory
E0520 11:41:07.947974 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/kindnet-214342/client.crt: no such file or directory
E0520 11:41:08.108244 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/kindnet-214342/client.crt: no such file or directory
E0520 11:41:08.428663 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/kindnet-214342/client.crt: no such file or directory
E0520 11:41:09.069220 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/kindnet-214342/client.crt: no such file or directory
E0520 11:41:10.350000 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/kindnet-214342/client.crt: no such file or directory
E0520 11:41:12.910597 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/kindnet-214342/client.crt: no such file or directory
E0520 11:41:17.850250 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/auto-214342/client.crt: no such file or directory
E0520 11:41:18.031646 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/kindnet-214342/client.crt: no such file or directory
E0520 11:41:28.272761 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/kindnet-214342/client.crt: no such file or directory
E0520 11:41:31.693626 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/no-preload-027096/client.crt: no such file or directory

TestNetworkPlugins/group/kindnet/Start (82.67s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-214342 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0520 11:35:35.357691 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/old-k8s-version-776336/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-214342 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m22.673064513s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (82.67s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-214342 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

TestNetworkPlugins/group/auto/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-214342 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-vqdnw" [9b4f17bc-8910-47f8-8836-228c80d9f9bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-vqdnw" [9b4f17bc-8910-47f8-8836-228c80d9f9bf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004673641s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.29s)

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-214342 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-214342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-214342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-w7prq" [8dea8574-c87b-4c60-94a7-7d624b84d488] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004711499s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/Start (76.83s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-214342 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-214342 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m16.82566202s)
--- PASS: TestNetworkPlugins/group/calico/Start (76.83s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-214342 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.65s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-214342 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-s9dz5" [380c69cc-b4b7-41d7-b89b-14af636718ed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-s9dz5" [380c69cc-b4b7-41d7-b89b-14af636718ed] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005023691s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.65s)

TestNetworkPlugins/group/kindnet/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-214342 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.28s)

TestNetworkPlugins/group/kindnet/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-214342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.28s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-214342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/custom-flannel/Start (59.66s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-214342 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0520 11:36:59.376834 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/no-preload-027096/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-214342 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (59.663301572s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.66s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-z6vzq" [5d937a9a-0ac5-4d89-bbdc-263311dbffd6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005474947s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-214342 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-214342 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bnzlx" [45a5fd63-caf6-4eb3-b77a-c301ae97a447] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-bnzlx" [45a5fd63-caf6-4eb3-b77a-c301ae97a447] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.008030817s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.28s)

TestNetworkPlugins/group/calico/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-214342 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.31s)

TestNetworkPlugins/group/calico/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-214342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

TestNetworkPlugins/group/calico/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-214342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-214342 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-214342 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-8nldp" [eca0fe3c-8164-4005-a60a-5f6671e6fec5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-8nldp" [eca0fe3c-8164-4005-a60a-5f6671e6fec5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004728955s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.39s)

TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-214342 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-214342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-214342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

TestNetworkPlugins/group/enable-default-cni/Start (91.95s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-214342 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-214342 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m31.950424755s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (91.95s)

TestNetworkPlugins/group/flannel/Start (70.54s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-214342 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0520 11:38:59.642142 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/default-k8s-diff-port-856451/client.crt: no such file or directory
E0520 11:38:59.647377 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/default-k8s-diff-port-856451/client.crt: no such file or directory
E0520 11:38:59.657625 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/default-k8s-diff-port-856451/client.crt: no such file or directory
E0520 11:38:59.677882 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/default-k8s-diff-port-856451/client.crt: no such file or directory
E0520 11:38:59.718131 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/default-k8s-diff-port-856451/client.crt: no such file or directory
E0520 11:38:59.798593 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/default-k8s-diff-port-856451/client.crt: no such file or directory
E0520 11:38:59.958870 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/default-k8s-diff-port-856451/client.crt: no such file or directory
E0520 11:39:00.279157 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/default-k8s-diff-port-856451/client.crt: no such file or directory
E0520 11:39:00.919901 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/default-k8s-diff-port-856451/client.crt: no such file or directory
E0520 11:39:01.219201 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/addons-091599/client.crt: no such file or directory
E0520 11:39:02.200676 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/default-k8s-diff-port-856451/client.crt: no such file or directory
E0520 11:39:04.761163 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/default-k8s-diff-port-856451/client.crt: no such file or directory
E0520 11:39:09.881676 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/default-k8s-diff-port-856451/client.crt: no such file or directory
E0520 11:39:20.122324 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/default-k8s-diff-port-856451/client.crt: no such file or directory
E0520 11:39:32.162645 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/functional-335695/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-214342 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m10.536784074s)
--- PASS: TestNetworkPlugins/group/flannel/Start (70.54s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-kpcv8" [dbd3a9cb-7d97-4d07-9048-d9696812a514] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003619905s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-214342 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-214342 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-6x6bs" [e8fe09b4-4cc2-4740-8b16-8b809e311316] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0520 11:39:40.602539 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/default-k8s-diff-port-856451/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-6x6bs" [e8fe09b4-4cc2-4740-8b16-8b809e311316] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004000114s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.28s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-214342 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/flannel/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-214342 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-99w25" [b65b70d3-6768-4011-a87b-e5ac9ad9fa11] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-99w25" [b65b70d3-6768-4011-a87b-e5ac9ad9fa11] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004328223s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.25s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-214342 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-214342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-214342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
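Note: of the three probes above, HairPin is the strictest: the pod dials its own Service name, which only succeeds when the CNI handles hairpin NAT. In the nc invocation, -z connects without sending data, -w 5 caps the connect wait at 5 seconds, and -i 5 spaces repeated probes by 5 seconds. The same check can be rerun by hand:
  # Pod-to-its-own-service (hairpin) connectivity check, as run by the test.
  kubectl --context enable-default-cni-214342 exec deployment/netcat -- \
    /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"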

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-214342 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-214342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-214342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (87.6s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-214342 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-214342 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m27.595792855s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.60s)
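Note: the Start step is reproducible outside CI; everything except the profile name is dictated by the test matrix. A local repro sketch, assuming an arm64 host with Docker and a built out/minikube-linux-arm64 binary:
  # Bring up the same single-node cluster: bridge CNI on the crio runtime.
  out/minikube-linux-arm64 start -p bridge-214342 --memory=3072 \
    --alsologtostderr --wait=true --wait-timeout=15m \
    --cni=bridge --driver=docker --container-runtime=crio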

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-214342 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-214342 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-t6w8s" [cb8c6475-4a5c-4d64-a187-5f7ff161da8b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0520 11:41:43.483803 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/default-k8s-diff-port-856451/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-t6w8s" [cb8c6475-4a5c-4d64-a187-5f7ff161da8b] Running
E0520 11:41:48.753270 1469078 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/kindnet-214342/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003198511s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-214342 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-214342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-214342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (29/327)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)
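Note: the skip triggers because a preloaded image tarball already exists for this Kubernetes version and runtime, so caching individual images is unnecessary. One way to see what the check sees; the cache path is minikube's usual convention, stated here as an assumption rather than read from this log:
  # Preload tarballs are keyed by k8s version, runtime, and arch.
  ls ~/.minikube/cache/preloaded-tarball/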

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.54s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-161399 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-161399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-161399
--- SKIP: TestDownloadOnlyKic (0.54s)

                                                
                                    
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)
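Note: this skip reflects an upstream image gap, not a test failure: the MySQL image used by the test publishes no arm64 variant. A hypothetical spot check, with the tag as a placeholder:
  # Absence of an arm64 entry in the manifest list explains the skip.
  docker manifest inspect mysql:5.7 | grep -i arm64 || echo "no arm64 variant"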

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-592371" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-592371
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.54s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-214342 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-214342

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-214342

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-214342

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-214342

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-214342

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-214342

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-214342

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-214342

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-214342

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-214342

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: /etc/hosts:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: /etc/resolv.conf:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-214342

>>> host: crictl pods:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: crictl containers:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> k8s: describe netcat deployment:
error: context "kubenet-214342" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-214342" does not exist

>>> k8s: netcat logs:
error: context "kubenet-214342" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-214342" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-214342" does not exist

>>> k8s: coredns logs:
error: context "kubenet-214342" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-214342" does not exist

>>> k8s: api server logs:
error: context "kubenet-214342" does not exist

>>> host: /etc/cni:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: ip a s:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: ip r s:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: iptables-save:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: iptables table nat:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-214342" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-214342" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-214342" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: kubelet daemon config:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> k8s: kubelet logs:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18925-1463640/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 May 2024 11:15:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-030212
contexts:
- context:
    cluster: kubernetes-upgrade-030212
    extensions:
    - extension:
        last-update: Mon, 20 May 2024 11:15:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: kubernetes-upgrade-030212
  name: kubernetes-upgrade-030212
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-030212
  user:
    client-certificate: /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/kubernetes-upgrade-030212/client.crt
    client-key: /home/jenkins/minikube-integration/18925-1463640/.minikube/profiles/kubernetes-upgrade-030212/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-214342

>>> host: docker daemon status:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: docker daemon config:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: docker system info:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: cri-docker daemon status:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: cri-docker daemon config:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: cri-dockerd version:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: containerd daemon status:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: containerd daemon config:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: containerd config dump:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: crio daemon status:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: crio daemon config:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: /etc/crio:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

>>> host: crio config:
* Profile "kubenet-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-214342"

----------------------- debugLogs end: kubenet-214342 [took: 4.219920922s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-214342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-214342
--- SKIP: TestNetworkPlugins/group/kubenet (4.54s)
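Note: kubenet is the legacy kubelet-managed network mode, while minikube always provisions a CNI when the runtime is crio, so the combination is rejected before any cluster starts; that is why every debug probe above reports a missing profile or context. The refused configuration would look roughly like this hypothetical invocation:
  # Requesting kubenet alongside crio is expected to be refused, since crio requires a CNI.
  out/minikube-linux-arm64 start -p kubenet-214342 --network-plugin=kubenet \
    --driver=docker --container-runtime=crio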

                                                
                                    
TestNetworkPlugins/group/cilium (5.28s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-214342 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-214342

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-214342

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-214342

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-214342

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-214342

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-214342

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-214342

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-214342

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-214342

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-214342

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-214342

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-214342" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-214342" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-214342" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-214342" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-214342" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-214342" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-214342" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-214342" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-214342

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-214342

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-214342" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-214342" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-214342

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-214342

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-214342" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-214342" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-214342" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-214342" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-214342" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-214342

>>> host: docker daemon status:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

>>> host: docker daemon config:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

>>> host: docker system info:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

>>> host: cri-docker daemon status:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

>>> host: cri-docker daemon config:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

>>> host: cri-dockerd version:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

>>> host: containerd daemon status:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

>>> host: containerd daemon config:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

>>> host: containerd config dump:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

>>> host: crio daemon status:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

>>> host: crio daemon config:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

>>> host: /etc/crio:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

>>> host: crio config:
* Profile "cilium-214342" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-214342"

----------------------- debugLogs end: cilium-214342 [took: 5.097220273s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-214342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-214342
--- SKIP: TestNetworkPlugins/group/cilium (5.28s)
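Note: every debugLogs probe above failed identically because no "cilium-214342" profile or kubeconfig context existed when the collector ran; the test was skipped before "minikube start" was ever invoked, so there was nothing to inspect. For reference only, a minimal reproduction sketch built from the commands the log messages themselves suggest (this is not what the job executed, and the --cni=cilium value is an assumption, since the skipped test never reached cluster start):

    # list existing profiles; cilium-214342 is absent in this run
    minikube profile list
    # assumption: start the profile with the cilium CNI so the kube context exists
    minikube start -p cilium-214342 --cni=cilium
    # the kubectl probes above would now resolve the context instead of erroring
    kubectl --context cilium-214342 get pods -n kube-system
    # clean up, mirroring the delete at helpers_test.go:178
    minikube delete -p cilium-214342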
