Test Report: Docker_Linux_crio_arm64 19360

cd79d30fb13c14d30ca0dbfe151ef256c3a20136:2024-07-31:35589

Failed tests (3/336)

| Order | Failed test                                            | Duration (s) |
|-------|--------------------------------------------------------|--------------|
| 43    | TestAddons/parallel/Ingress                            | 152.63       |
| 45    | TestAddons/parallel/MetricsServer                      | 353.7        |
| 361   | TestStartStop/group/old-k8s-version/serial/SecondStart | 374.06       |
TestAddons/parallel/Ingress (152.63s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-849486 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-849486 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-849486 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [954318df-d324-41f5-b62e-e87a9676c677] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [954318df-d324-41f5-b62e-e87a9676c677] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003606984s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-849486 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-849486 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.448044332s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
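For context, "exit status 28" above is curl's "operation timed out" exit code, propagated back through minikube ssh, which matches the 2m9s the command ran before failing. A minimal triage sketch against this profile (the --max-time value and the extra kubectl checks are illustrative additions, not part of the test):

    # Assumes the addons-849486 profile is still running and the
    # out/minikube-linux-arm64 binary is built.
    PROFILE=addons-849486

    # Is the ingress-nginx controller actually Ready?
    kubectl --context "$PROFILE" -n ingress-nginx get pods \
      --selector=app.kubernetes.io/component=controller

    # Was the Ingress object admitted, and does it carry an address?
    kubectl --context "$PROFILE" get ingress -A

    # Re-run the failing request with a short explicit timeout and verbose
    # output, so a connect timeout is distinguishable from a slow backend.
    out/minikube-linux-arm64 -p "$PROFILE" ssh \
      "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"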
addons_test.go:288: (dbg) Run:  kubectl --context addons-849486 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-849486 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-849486 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-849486 addons disable ingress-dns --alsologtostderr -v=1: (1.714878363s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-849486 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-849486 addons disable ingress --alsologtostderr -v=1: (7.726361912s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-849486
helpers_test.go:235: (dbg) docker inspect addons-849486:

-- stdout --
	[
	    {
	        "Id": "110805b36784ba9132cb5c8aa53b735d1e872233936c0901abe28f3f75a710bf",
	        "Created": "2024-07-31T22:31:58.031500435Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1586136,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-31T22:31:58.175549775Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2cd84ab2172023a68162f38a55db46353562cea41552fd8e8bdec97b31b2c495",
	        "ResolvConfPath": "/var/lib/docker/containers/110805b36784ba9132cb5c8aa53b735d1e872233936c0901abe28f3f75a710bf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/110805b36784ba9132cb5c8aa53b735d1e872233936c0901abe28f3f75a710bf/hostname",
	        "HostsPath": "/var/lib/docker/containers/110805b36784ba9132cb5c8aa53b735d1e872233936c0901abe28f3f75a710bf/hosts",
	        "LogPath": "/var/lib/docker/containers/110805b36784ba9132cb5c8aa53b735d1e872233936c0901abe28f3f75a710bf/110805b36784ba9132cb5c8aa53b735d1e872233936c0901abe28f3f75a710bf-json.log",
	        "Name": "/addons-849486",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-849486:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-849486",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/31353d832d5316f2cf2b976850e7985a5e9ed94c5773a72d9eb63ee2765a9f8c-init/diff:/var/lib/docker/overlay2/a3c8edb55465dd5b1044de542fb24c31e00154ba5ba4e9841112d37a01d06a98/diff",
	                "MergedDir": "/var/lib/docker/overlay2/31353d832d5316f2cf2b976850e7985a5e9ed94c5773a72d9eb63ee2765a9f8c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/31353d832d5316f2cf2b976850e7985a5e9ed94c5773a72d9eb63ee2765a9f8c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/31353d832d5316f2cf2b976850e7985a5e9ed94c5773a72d9eb63ee2765a9f8c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-849486",
	                "Source": "/var/lib/docker/volumes/addons-849486/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-849486",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-849486",
	                "name.minikube.sigs.k8s.io": "addons-849486",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5d1aa9829e876ee8e974ccd612766a7eb8a4d370a60753ebc330163f34fbac0c",
	            "SandboxKey": "/var/run/docker/netns/5d1aa9829e87",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34641"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34642"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34645"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34643"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34644"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-849486": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "6cf7ab4ffd119fe4bd883867754b7e9719f07178da2ab1a73467da450a3e07e7",
	                    "EndpointID": "cada104ec607bdc42b2e078063641c818fc4743dfe2fcc7202b917eaa23229af",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-849486",
	                        "110805b36784"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
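Note that HostConfig.PortBindings in the dump shows empty HostPort values because the container was started with ephemeral 127.0.0.1:: publishes (visible in the docker run invocation in the Last Start log below); the resolved ports appear under NetworkSettings.Ports. As a sketch, individual fields can be pulled with --format templates rather than parsing the full JSON; the first template below is the same one the harness itself uses later to find the SSH port (commands assume the addons-849486 container still exists):

    # Resolved host port for the container's SSH endpoint ("34641" above).
    docker container inspect addons-849486 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'

    # Container state and its IP on the addons-849486 network.
    docker container inspect addons-849486 --format '{{.State.Status}}'
    docker container inspect addons-849486 \
      --format '{{(index .NetworkSettings.Networks "addons-849486").IPAddress}}'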
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-849486 -n addons-849486
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-849486 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-849486 logs -n 25: (1.381605083s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-586553                                                                     | download-only-586553   | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC | 31 Jul 24 22:31 UTC |
	| start   | --download-only -p                                                                          | download-docker-364967 | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC |                     |
	|         | download-docker-364967                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-364967                                                                   | download-docker-364967 | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC | 31 Jul 24 22:31 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-970793   | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC |                     |
	|         | binary-mirror-970793                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:38085                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-970793                                                                     | binary-mirror-970793   | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC | 31 Jul 24 22:31 UTC |
	| addons  | disable dashboard -p                                                                        | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC |                     |
	|         | addons-849486                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC |                     |
	|         | addons-849486                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-849486 --wait=true                                                                | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC | 31 Jul 24 22:35 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-849486 addons disable                                                                | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:35 UTC | 31 Jul 24 22:35 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-849486 ip                                                                            | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:35 UTC | 31 Jul 24 22:35 UTC |
	| addons  | addons-849486 addons disable                                                                | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:35 UTC | 31 Jul 24 22:35 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-849486 addons disable                                                                | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:36 UTC | 31 Jul 24 22:36 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:36 UTC | 31 Jul 24 22:36 UTC |
	|         | -p addons-849486                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-849486 ssh cat                                                                       | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:36 UTC | 31 Jul 24 22:36 UTC |
	|         | /opt/local-path-provisioner/pvc-9f22855a-010f-402c-a661-b7cd21d58d00_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-849486 addons disable                                                                | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:36 UTC | 31 Jul 24 22:37 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-849486 addons                                                                        | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:36 UTC | 31 Jul 24 22:36 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-849486 addons                                                                        | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:36 UTC | 31 Jul 24 22:36 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:36 UTC | 31 Jul 24 22:36 UTC |
	|         | addons-849486                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:36 UTC | 31 Jul 24 22:36 UTC |
	|         | -p addons-849486                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-849486 addons disable                                                                | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:36 UTC | 31 Jul 24 22:37 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:37 UTC | 31 Jul 24 22:37 UTC |
	|         | addons-849486                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-849486 ssh curl -s                                                                   | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:37 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-849486 ip                                                                            | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:39 UTC | 31 Jul 24 22:39 UTC |
	| addons  | addons-849486 addons disable                                                                | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:39 UTC | 31 Jul 24 22:39 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-849486 addons disable                                                                | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:39 UTC | 31 Jul 24 22:39 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 22:31:33
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 22:31:33.510910 1585635 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:31:33.511056 1585635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:31:33.511067 1585635 out.go:304] Setting ErrFile to fd 2...
	I0731 22:31:33.511072 1585635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:31:33.511314 1585635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1579223/.minikube/bin
	I0731 22:31:33.511782 1585635 out.go:298] Setting JSON to false
	I0731 22:31:33.512674 1585635 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22432,"bootTime":1722442662,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0731 22:31:33.512750 1585635 start.go:139] virtualization:  
	I0731 22:31:33.515566 1585635 out.go:177] * [addons-849486] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0731 22:31:33.518249 1585635 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 22:31:33.518426 1585635 notify.go:220] Checking for updates...
	I0731 22:31:33.522575 1585635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 22:31:33.524914 1585635 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1579223/kubeconfig
	I0731 22:31:33.527037 1585635 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1579223/.minikube
	I0731 22:31:33.529620 1585635 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 22:31:33.531957 1585635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 22:31:33.534274 1585635 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 22:31:33.558578 1585635 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0731 22:31:33.558688 1585635 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 22:31:33.617311 1585635 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-31 22:31:33.60785426 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0731 22:31:33.617437 1585635 docker.go:307] overlay module found
	I0731 22:31:33.624625 1585635 out.go:177] * Using the docker driver based on user configuration
	I0731 22:31:33.626637 1585635 start.go:297] selected driver: docker
	I0731 22:31:33.626655 1585635 start.go:901] validating driver "docker" against <nil>
	I0731 22:31:33.626669 1585635 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 22:31:33.627317 1585635 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 22:31:33.698595 1585635 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-31 22:31:33.689787382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0731 22:31:33.698758 1585635 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 22:31:33.698984 1585635 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 22:31:33.701137 1585635 out.go:177] * Using Docker driver with root privileges
	I0731 22:31:33.703587 1585635 cni.go:84] Creating CNI manager for ""
	I0731 22:31:33.703608 1585635 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 22:31:33.703620 1585635 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 22:31:33.703718 1585635 start.go:340] cluster config:
	{Name:addons-849486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-849486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:31:33.706140 1585635 out.go:177] * Starting "addons-849486" primary control-plane node in "addons-849486" cluster
	I0731 22:31:33.707944 1585635 cache.go:121] Beginning downloading kic base image for docker with crio
	I0731 22:31:33.709697 1585635 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0731 22:31:33.711636 1585635 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 22:31:33.711690 1585635 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1579223/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	I0731 22:31:33.711703 1585635 cache.go:56] Caching tarball of preloaded images
	I0731 22:31:33.711784 1585635 preload.go:172] Found /home/jenkins/minikube-integration/19360-1579223/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0731 22:31:33.711800 1585635 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 22:31:33.712145 1585635 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/config.json ...
	I0731 22:31:33.712179 1585635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/config.json: {Name:mk9a181ce5af1abc5c2aaae723da67339e76d270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:31:33.712298 1585635 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0731 22:31:33.727715 1585635 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0731 22:31:33.727831 1585635 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0731 22:31:33.727855 1585635 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0731 22:31:33.727864 1585635 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0731 22:31:33.727872 1585635 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0731 22:31:33.727877 1585635 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0731 22:31:50.380327 1585635 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0731 22:31:50.380368 1585635 cache.go:194] Successfully downloaded all kic artifacts
	I0731 22:31:50.380427 1585635 start.go:360] acquireMachinesLock for addons-849486: {Name:mk26524a28b5e05c49d38e8337baa6f991516659 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 22:31:50.380549 1585635 start.go:364] duration metric: took 98.075µs to acquireMachinesLock for "addons-849486"
	I0731 22:31:50.380580 1585635 start.go:93] Provisioning new machine with config: &{Name:addons-849486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-849486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 22:31:50.380662 1585635 start.go:125] createHost starting for "" (driver="docker")
	I0731 22:31:50.383583 1585635 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0731 22:31:50.383848 1585635 start.go:159] libmachine.API.Create for "addons-849486" (driver="docker")
	I0731 22:31:50.383891 1585635 client.go:168] LocalClient.Create starting
	I0731 22:31:50.384022 1585635 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca.pem
	I0731 22:31:50.616017 1585635 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/cert.pem
	I0731 22:31:51.403220 1585635 cli_runner.go:164] Run: docker network inspect addons-849486 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 22:31:51.418781 1585635 cli_runner.go:211] docker network inspect addons-849486 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 22:31:51.418872 1585635 network_create.go:284] running [docker network inspect addons-849486] to gather additional debugging logs...
	I0731 22:31:51.418899 1585635 cli_runner.go:164] Run: docker network inspect addons-849486
	W0731 22:31:51.434073 1585635 cli_runner.go:211] docker network inspect addons-849486 returned with exit code 1
	I0731 22:31:51.434106 1585635 network_create.go:287] error running [docker network inspect addons-849486]: docker network inspect addons-849486: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-849486 not found
	I0731 22:31:51.434126 1585635 network_create.go:289] output of [docker network inspect addons-849486]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-849486 not found
	
	** /stderr **
	I0731 22:31:51.434223 1585635 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 22:31:51.450170 1585635 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001752fa0}
	I0731 22:31:51.450217 1585635 network_create.go:124] attempt to create docker network addons-849486 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0731 22:31:51.450278 1585635 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-849486 addons-849486
	I0731 22:31:51.518582 1585635 network_create.go:108] docker network addons-849486 192.168.49.0/24 created
	I0731 22:31:51.518620 1585635 kic.go:121] calculated static IP "192.168.49.2" for the "addons-849486" container
	I0731 22:31:51.518694 1585635 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 22:31:51.534367 1585635 cli_runner.go:164] Run: docker volume create addons-849486 --label name.minikube.sigs.k8s.io=addons-849486 --label created_by.minikube.sigs.k8s.io=true
	I0731 22:31:51.550715 1585635 oci.go:103] Successfully created a docker volume addons-849486
	I0731 22:31:51.550797 1585635 cli_runner.go:164] Run: docker run --rm --name addons-849486-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-849486 --entrypoint /usr/bin/test -v addons-849486:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0731 22:31:53.670897 1585635 cli_runner.go:217] Completed: docker run --rm --name addons-849486-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-849486 --entrypoint /usr/bin/test -v addons-849486:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib: (2.120063761s)
	I0731 22:31:53.670929 1585635 oci.go:107] Successfully prepared a docker volume addons-849486
	I0731 22:31:53.670942 1585635 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 22:31:53.670961 1585635 kic.go:194] Starting extracting preloaded images to volume ...
	I0731 22:31:53.671041 1585635 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19360-1579223/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-849486:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0731 22:31:57.953660 1585635 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19360-1579223/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-849486:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir: (4.282570705s)
	I0731 22:31:57.953703 1585635 kic.go:203] duration metric: took 4.282738778s to extract preloaded images to volume ...
	W0731 22:31:57.953835 1585635 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0731 22:31:57.953956 1585635 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0731 22:31:58.016808 1585635 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-849486 --name addons-849486 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-849486 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-849486 --network addons-849486 --ip 192.168.49.2 --volume addons-849486:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7
	I0731 22:31:58.357833 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Running}}
	I0731 22:31:58.384035 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:31:58.405395 1585635 cli_runner.go:164] Run: docker exec addons-849486 stat /var/lib/dpkg/alternatives/iptables
	I0731 22:31:58.465013 1585635 oci.go:144] the created container "addons-849486" has a running status.
	I0731 22:31:58.465042 1585635 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa...
	I0731 22:31:59.092003 1585635 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0731 22:31:59.117851 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:31:59.149149 1585635 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0731 22:31:59.149225 1585635 kic_runner.go:114] Args: [docker exec --privileged addons-849486 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0731 22:31:59.217221 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:31:59.242987 1585635 machine.go:94] provisionDockerMachine start ...
	I0731 22:31:59.243070 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:31:59.260682 1585635 main.go:141] libmachine: Using SSH client type: native
	I0731 22:31:59.260954 1585635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34641 <nil> <nil>}
	I0731 22:31:59.260963 1585635 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 22:31:59.400956 1585635 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-849486
	
	I0731 22:31:59.401025 1585635 ubuntu.go:169] provisioning hostname "addons-849486"
	I0731 22:31:59.401147 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:31:59.422155 1585635 main.go:141] libmachine: Using SSH client type: native
	I0731 22:31:59.422577 1585635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34641 <nil> <nil>}
	I0731 22:31:59.422672 1585635 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-849486 && echo "addons-849486" | sudo tee /etc/hostname
	I0731 22:31:59.573922 1585635 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-849486
	
	I0731 22:31:59.574080 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:31:59.591094 1585635 main.go:141] libmachine: Using SSH client type: native
	I0731 22:31:59.591348 1585635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34641 <nil> <nil>}
	I0731 22:31:59.591365 1585635 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-849486' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-849486/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-849486' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 22:31:59.725702 1585635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
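	The script above keeps local name resolution working after the hostname change: if the new hostname is missing from /etc/hosts, it either rewrites an existing 127.0.1.1 entry or appends one. A lightly annotated equivalent, with the hostname hard-coded as in the log:

    if ! grep -xq '.*\saddons-849486' /etc/hosts; then        # hostname not yet present
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then            # reuse the 127.0.1.1 line
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-849486/g' /etc/hosts
      else                                                    # or append a fresh one
        echo '127.0.1.1 addons-849486' | sudo tee -a /etc/hosts
      fi
    fi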
	I0731 22:31:59.725788 1585635 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1579223/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1579223/.minikube}
	I0731 22:31:59.725842 1585635 ubuntu.go:177] setting up certificates
	I0731 22:31:59.725880 1585635 provision.go:84] configureAuth start
	I0731 22:31:59.725975 1585635 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-849486
	I0731 22:31:59.743652 1585635 provision.go:143] copyHostCerts
	I0731 22:31:59.743732 1585635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.pem (1082 bytes)
	I0731 22:31:59.743848 1585635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1579223/.minikube/cert.pem (1123 bytes)
	I0731 22:31:59.743902 1585635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1579223/.minikube/key.pem (1679 bytes)
	I0731 22:31:59.743948 1585635 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca-key.pem org=jenkins.addons-849486 san=[127.0.0.1 192.168.49.2 addons-849486 localhost minikube]
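	The server cert generated above is signed by the local minikube CA and must carry every name and IP the endpoint may be reached by (the san=[...] list in the log). A rough openssl sketch of such an issuance, assuming ca.pem and ca-key.pem from the certs directory; the file names and validity period are illustrative, not what libmachine does internally:

    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.addons-849486" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:addons-849486,DNS:localhost,DNS:minikube\n')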
	I0731 22:32:00.282847 1585635 provision.go:177] copyRemoteCerts
	I0731 22:32:00.283201 1585635 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 22:32:00.283381 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:00.324981 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:00.436508 1585635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 22:32:00.474009 1585635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 22:32:00.508610 1585635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 22:32:00.539121 1585635 provision.go:87] duration metric: took 813.210459ms to configureAuth
	I0731 22:32:00.539152 1585635 ubuntu.go:193] setting minikube options for container-runtime
	I0731 22:32:00.539377 1585635 config.go:182] Loaded profile config "addons-849486": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:32:00.539491 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:00.559506 1585635 main.go:141] libmachine: Using SSH client type: native
	I0731 22:32:00.559809 1585635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34641 <nil> <nil>}
	I0731 22:32:00.559825 1585635 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 22:32:00.800269 1585635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 22:32:00.800312 1585635 machine.go:97] duration metric: took 1.557306826s to provisionDockerMachine
	I0731 22:32:00.800324 1585635 client.go:171] duration metric: took 10.416423921s to LocalClient.Create
	I0731 22:32:00.800343 1585635 start.go:167] duration metric: took 10.416495913s to libmachine.API.Create "addons-849486"
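	Provisioning ended just above by writing a CRI-O environment file and restarting the runtime (the %s verb in that command was originally logged as the Go formatter artifact %!s(MISSING), restored here). The same step written out as a plain script:

    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio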
	I0731 22:32:00.800391 1585635 start.go:293] postStartSetup for "addons-849486" (driver="docker")
	I0731 22:32:00.800423 1585635 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 22:32:00.800519 1585635 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 22:32:00.800670 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:00.820591 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:00.918581 1585635 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 22:32:00.921906 1585635 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 22:32:00.921943 1585635 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 22:32:00.921973 1585635 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 22:32:00.921987 1585635 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0731 22:32:00.921999 1585635 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1579223/.minikube/addons for local assets ...
	I0731 22:32:00.922082 1585635 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1579223/.minikube/files for local assets ...
	I0731 22:32:00.922109 1585635 start.go:296] duration metric: took 121.698256ms for postStartSetup
	I0731 22:32:00.922429 1585635 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-849486
	I0731 22:32:00.939252 1585635 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/config.json ...
	I0731 22:32:00.939628 1585635 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:32:00.939693 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:00.957116 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:01.046193 1585635 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 22:32:01.050758 1585635 start.go:128] duration metric: took 10.670079194s to createHost
	I0731 22:32:01.050784 1585635 start.go:83] releasing machines lock for "addons-849486", held for 10.670221536s
	I0731 22:32:01.050879 1585635 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-849486
	I0731 22:32:01.073367 1585635 ssh_runner.go:195] Run: cat /version.json
	I0731 22:32:01.073429 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:01.073530 1585635 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 22:32:01.073603 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:01.098312 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:01.110781 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:01.318489 1585635 ssh_runner.go:195] Run: systemctl --version
	I0731 22:32:01.322881 1585635 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 22:32:01.466230 1585635 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 22:32:01.470564 1585635 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 22:32:01.491616 1585635 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0731 22:32:01.491694 1585635 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 22:32:01.523532 1585635 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
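	To keep the CNI config minikube is about to install authoritative, the two find invocations above rename any loopback and bridge/podman configs to *.mk_disabled rather than deleting them. The same command with explicit shell quoting added (the log prints the arguments unquoted):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;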
	I0731 22:32:01.523552 1585635 start.go:495] detecting cgroup driver to use...
	I0731 22:32:01.523585 1585635 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0731 22:32:01.523633 1585635 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 22:32:01.540473 1585635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 22:32:01.552305 1585635 docker.go:217] disabling cri-docker service (if available) ...
	I0731 22:32:01.552368 1585635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 22:32:01.566067 1585635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 22:32:01.581193 1585635 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 22:32:01.673185 1585635 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 22:32:01.777561 1585635 docker.go:233] disabling docker service ...
	I0731 22:32:01.777638 1585635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 22:32:01.797497 1585635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 22:32:01.810203 1585635 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 22:32:01.914815 1585635 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 22:32:02.020469 1585635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 22:32:02.033289 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 22:32:02.051363 1585635 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 22:32:02.051493 1585635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:32:02.062424 1585635 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 22:32:02.062541 1585635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:32:02.073475 1585635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:32:02.083339 1585635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:32:02.093221 1585635 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 22:32:02.102492 1585635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:32:02.112558 1585635 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:32:02.128684 1585635 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:32:02.138839 1585635 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 22:32:02.147528 1585635 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 22:32:02.156071 1585635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:32:02.253217 1585635 ssh_runner.go:195] Run: sudo systemctl restart crio
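	The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl, followed by a daemon reload and a runtime restart. Consolidated into one annotated script (same commands, order preserved, assuming the stock 02-crio.conf layout):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                  # drop any stale value
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo grep -q '^ *default_sysctls' "$CONF" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    sudo systemctl daemon-reload && sudo systemctl restart crio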
	I0731 22:32:02.383925 1585635 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 22:32:02.384019 1585635 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 22:32:02.387942 1585635 start.go:563] Will wait 60s for crictl version
	I0731 22:32:02.388026 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:32:02.391317 1585635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 22:32:02.430786 1585635 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0731 22:32:02.430912 1585635 ssh_runner.go:195] Run: crio --version
	I0731 22:32:02.470262 1585635 ssh_runner.go:195] Run: crio --version
	I0731 22:32:02.511268 1585635 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.24.6 ...
	I0731 22:32:02.513265 1585635 cli_runner.go:164] Run: docker network inspect addons-849486 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 22:32:02.529163 1585635 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0731 22:32:02.532739 1585635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 22:32:02.543485 1585635 kubeadm.go:883] updating cluster {Name:addons-849486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-849486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 22:32:02.543610 1585635 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 22:32:02.543670 1585635 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 22:32:02.621378 1585635 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 22:32:02.621405 1585635 crio.go:433] Images already preloaded, skipping extraction
	I0731 22:32:02.621464 1585635 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 22:32:02.656117 1585635 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 22:32:02.656137 1585635 cache_images.go:84] Images are preloaded, skipping loading
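	Both crictl calls above return the image list CRI-O knows about; minikube parses the JSON form to decide that the preloaded image set is already in place. A quick manual spot-check (each image object carries one "repoTags" key):

    sudo crictl images --output json | grep -c '"repoTags"'   # rough image count
    sudo crictl images                                        # human-readable table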
	I0731 22:32:02.656145 1585635 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.3 crio true true} ...
	I0731 22:32:02.656251 1585635 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-849486 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-849486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 22:32:02.656338 1585635 ssh_runner.go:195] Run: crio config
	I0731 22:32:02.705770 1585635 cni.go:84] Creating CNI manager for ""
	I0731 22:32:02.705795 1585635 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 22:32:02.705805 1585635 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 22:32:02.705841 1585635 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-849486 NodeName:addons-849486 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 22:32:02.705994 1585635 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-849486"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
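	The kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. It can be sanity-checked without touching the node, since kubeadm init supports a dry-run mode that only prints the objects it would create:

    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run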
	I0731 22:32:02.706070 1585635 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 22:32:02.714917 1585635 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 22:32:02.715010 1585635 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 22:32:02.723591 1585635 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0731 22:32:02.741776 1585635 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 22:32:02.760650 1585635 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0731 22:32:02.778991 1585635 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0731 22:32:02.782294 1585635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 22:32:02.792929 1585635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:32:02.877740 1585635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:32:02.891735 1585635 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486 for IP: 192.168.49.2
	I0731 22:32:02.891756 1585635 certs.go:194] generating shared ca certs ...
	I0731 22:32:02.891772 1585635 certs.go:226] acquiring lock for ca certs: {Name:mk6ccdabf08b8b9bfa2ad4dfbceb108d85e42085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:02.891908 1585635 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.key
	I0731 22:32:03.144057 1585635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.crt ...
	I0731 22:32:03.144090 1585635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.crt: {Name:mk017a40da3591fd0208865b47278b382b71fea7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:03.144320 1585635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.key ...
	I0731 22:32:03.144335 1585635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.key: {Name:mke8ea525ba4233d2e7fbc91d4e136fa0e33fe49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:03.145442 1585635 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/proxy-client-ca.key
	I0731 22:32:03.829130 1585635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1579223/.minikube/proxy-client-ca.crt ...
	I0731 22:32:03.829172 1585635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/proxy-client-ca.crt: {Name:mk5a0f34fdcacd89f0c298d2b42166e20350c428 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:03.829395 1585635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1579223/.minikube/proxy-client-ca.key ...
	I0731 22:32:03.829411 1585635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/proxy-client-ca.key: {Name:mkf2af353dde1d471523c82d104c539eb6e2321f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:03.829984 1585635 certs.go:256] generating profile certs ...
	I0731 22:32:03.830056 1585635 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.key
	I0731 22:32:03.830075 1585635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt with IP's: []
	I0731 22:32:04.137106 1585635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt ...
	I0731 22:32:04.137138 1585635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: {Name:mk4f2ef72148f6dd85edf5d60d243b4e64d61e85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:04.137337 1585635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.key ...
	I0731 22:32:04.137351 1585635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.key: {Name:mkda5591463fb8cab8138f91ff275cae5ae73033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:04.137437 1585635 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/apiserver.key.a4894f25
	I0731 22:32:04.137457 1585635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/apiserver.crt.a4894f25 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0731 22:32:04.500598 1585635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/apiserver.crt.a4894f25 ...
	I0731 22:32:04.500634 1585635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/apiserver.crt.a4894f25: {Name:mk2022dfacca8e76930277a08005fc059318f27d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:04.501398 1585635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/apiserver.key.a4894f25 ...
	I0731 22:32:04.501425 1585635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/apiserver.key.a4894f25: {Name:mk5ed3e5dbeb85607e558b6f3dc86dc1dc1a1b38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:04.502061 1585635 certs.go:381] copying /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/apiserver.crt.a4894f25 -> /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/apiserver.crt
	I0731 22:32:04.502151 1585635 certs.go:385] copying /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/apiserver.key.a4894f25 -> /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/apiserver.key
	I0731 22:32:04.502207 1585635 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/proxy-client.key
	I0731 22:32:04.502232 1585635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/proxy-client.crt with IP's: []
	I0731 22:32:04.964226 1585635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/proxy-client.crt ...
	I0731 22:32:04.964261 1585635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/proxy-client.crt: {Name:mka1f7ab818bd63f572762f7fa1e41c03bf06e3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:04.964859 1585635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/proxy-client.key ...
	I0731 22:32:04.964880 1585635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/proxy-client.key: {Name:mk760ddc11a470c36d6494f0b44f6af495dbbc35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:04.965639 1585635 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 22:32:04.965689 1585635 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca.pem (1082 bytes)
	I0731 22:32:04.965722 1585635 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/cert.pem (1123 bytes)
	I0731 22:32:04.965753 1585635 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/key.pem (1679 bytes)
	I0731 22:32:04.966426 1585635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 22:32:04.991331 1585635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 22:32:05.019290 1585635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 22:32:05.046812 1585635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 22:32:05.072561 1585635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0731 22:32:05.099620 1585635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 22:32:05.126498 1585635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 22:32:05.152268 1585635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 22:32:05.179127 1585635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 22:32:05.204360 1585635 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 22:32:05.223265 1585635 ssh_runner.go:195] Run: openssl version
	I0731 22:32:05.228952 1585635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 22:32:05.238846 1585635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:32:05.242646 1585635 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:32:05.242715 1585635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:32:05.249842 1585635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
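	The b5213941.0 link above is how OpenSSL trust stores index CA certificates: the file name is the certificate's subject hash plus a .0 suffix. Reconstructing the same link by hand:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # yields b5213941.0 here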
	I0731 22:32:05.259553 1585635 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 22:32:05.263159 1585635 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 22:32:05.263228 1585635 kubeadm.go:392] StartCluster: {Name:addons-849486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-849486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:32:05.263367 1585635 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 22:32:05.263432 1585635 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 22:32:05.318630 1585635 cri.go:89] found id: ""
	I0731 22:32:05.318751 1585635 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 22:32:05.329944 1585635 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 22:32:05.339184 1585635 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0731 22:32:05.339314 1585635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 22:32:05.350964 1585635 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 22:32:05.351032 1585635 kubeadm.go:157] found existing configuration files:
	
	I0731 22:32:05.351108 1585635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 22:32:05.360969 1585635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 22:32:05.361082 1585635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 22:32:05.370069 1585635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 22:32:05.380250 1585635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 22:32:05.380372 1585635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 22:32:05.390340 1585635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 22:32:05.400328 1585635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 22:32:05.400446 1585635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 22:32:05.410716 1585635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 22:32:05.421313 1585635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 22:32:05.421389 1585635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 22:32:05.430096 1585635 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0731 22:32:05.478756 1585635 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 22:32:05.479025 1585635 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 22:32:05.519973 1585635 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0731 22:32:05.520131 1585635 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1066-aws
	I0731 22:32:05.520210 1585635 kubeadm.go:310] OS: Linux
	I0731 22:32:05.520286 1585635 kubeadm.go:310] CGROUPS_CPU: enabled
	I0731 22:32:05.520368 1585635 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0731 22:32:05.520450 1585635 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0731 22:32:05.520530 1585635 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0731 22:32:05.520608 1585635 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0731 22:32:05.520690 1585635 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0731 22:32:05.520768 1585635 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0731 22:32:05.520847 1585635 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0731 22:32:05.520926 1585635 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0731 22:32:05.592372 1585635 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 22:32:05.592589 1585635 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 22:32:05.592734 1585635 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 22:32:05.828080 1585635 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 22:32:05.832068 1585635 out.go:204]   - Generating certificates and keys ...
	I0731 22:32:05.832277 1585635 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 22:32:05.832419 1585635 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 22:32:06.031019 1585635 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 22:32:06.382129 1585635 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 22:32:06.607331 1585635 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 22:32:06.902455 1585635 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 22:32:07.648129 1585635 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 22:32:07.648340 1585635 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-849486 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0731 22:32:08.964997 1585635 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 22:32:08.965364 1585635 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-849486 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0731 22:32:09.356903 1585635 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 22:32:09.878965 1585635 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 22:32:10.044142 1585635 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 22:32:10.044552 1585635 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 22:32:10.425788 1585635 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 22:32:10.657351 1585635 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 22:32:10.878485 1585635 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 22:32:11.715065 1585635 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 22:32:12.238190 1585635 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 22:32:12.240742 1585635 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 22:32:12.243742 1585635 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 22:32:12.246269 1585635 out.go:204]   - Booting up control plane ...
	I0731 22:32:12.246380 1585635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 22:32:12.246461 1585635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 22:32:12.246986 1585635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 22:32:12.257091 1585635 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 22:32:12.258815 1585635 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 22:32:12.258975 1585635 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 22:32:12.354413 1585635 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 22:32:12.354506 1585635 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 22:32:13.356027 1585635 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001702112s
	I0731 22:32:13.356115 1585635 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 22:32:19.357411 1585635 kubeadm.go:310] [api-check] The API server is healthy after 6.001342434s
	I0731 22:32:19.378307 1585635 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 22:32:19.390403 1585635 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 22:32:19.417717 1585635 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 22:32:19.417941 1585635 kubeadm.go:310] [mark-control-plane] Marking the node addons-849486 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 22:32:19.428297 1585635 kubeadm.go:310] [bootstrap-token] Using token: 3yv69l.mr5spb70c478inpl
	I0731 22:32:19.430472 1585635 out.go:204]   - Configuring RBAC rules ...
	I0731 22:32:19.430621 1585635 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 22:32:19.441056 1585635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 22:32:19.448280 1585635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 22:32:19.451734 1585635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 22:32:19.454948 1585635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 22:32:19.458353 1585635 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 22:32:19.764407 1585635 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 22:32:20.211704 1585635 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 22:32:20.764079 1585635 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 22:32:20.765410 1585635 kubeadm.go:310] 
	I0731 22:32:20.765484 1585635 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 22:32:20.765496 1585635 kubeadm.go:310] 
	I0731 22:32:20.765572 1585635 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 22:32:20.765579 1585635 kubeadm.go:310] 
	I0731 22:32:20.765603 1585635 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 22:32:20.765663 1585635 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 22:32:20.765716 1585635 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 22:32:20.765724 1585635 kubeadm.go:310] 
	I0731 22:32:20.765776 1585635 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 22:32:20.765782 1585635 kubeadm.go:310] 
	I0731 22:32:20.765828 1585635 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 22:32:20.765836 1585635 kubeadm.go:310] 
	I0731 22:32:20.765886 1585635 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 22:32:20.765961 1585635 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 22:32:20.766030 1585635 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 22:32:20.766038 1585635 kubeadm.go:310] 
	I0731 22:32:20.766119 1585635 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 22:32:20.766195 1585635 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 22:32:20.766203 1585635 kubeadm.go:310] 
	I0731 22:32:20.766284 1585635 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3yv69l.mr5spb70c478inpl \
	I0731 22:32:20.766386 1585635 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6aeac36715e45fc93d88e018ede78e85fe5c7ed540db2ea7e85e78caf89de8d9 \
	I0731 22:32:20.766410 1585635 kubeadm.go:310] 	--control-plane 
	I0731 22:32:20.766418 1585635 kubeadm.go:310] 
	I0731 22:32:20.766517 1585635 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 22:32:20.766527 1585635 kubeadm.go:310] 
	I0731 22:32:20.766606 1585635 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3yv69l.mr5spb70c478inpl \
	I0731 22:32:20.766707 1585635 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6aeac36715e45fc93d88e018ede78e85fe5c7ed540db2ea7e85e78caf89de8d9 
	I0731 22:32:20.769407 1585635 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1066-aws\n", err: exit status 1
	I0731 22:32:20.769524 1585635 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
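	The --discovery-token-ca-cert-hash value in the join commands above is a pin on the cluster CA's public key. It can be recomputed from the CA certificate; the path below follows this cluster's /var/lib/minikube/certs layout, and the command assumes an RSA CA key, which is what minikube generated earlier:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'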
	I0731 22:32:20.769546 1585635 cni.go:84] Creating CNI manager for ""
	I0731 22:32:20.769558 1585635 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 22:32:20.772007 1585635 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 22:32:20.774312 1585635 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 22:32:20.778116 1585635 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0731 22:32:20.778172 1585635 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0731 22:32:20.797522 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0731 22:32:21.076907 1585635 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 22:32:21.077037 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:21.077157 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-849486 minikube.k8s.io/updated_at=2024_07_31T22_32_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1 minikube.k8s.io/name=addons-849486 minikube.k8s.io/primary=true
	I0731 22:32:21.085488 1585635 ops.go:34] apiserver oom_adj: -16
	I0731 22:32:21.206192 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:21.706478 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:22.206715 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:22.706393 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:23.206724 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:23.707089 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:24.206489 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:24.706888 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:25.206323 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:25.707128 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:26.206522 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:26.706266 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:27.206558 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:27.706844 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:28.206638 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:28.706424 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:29.206677 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:29.706655 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:30.206950 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:30.706366 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:31.207218 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:31.706707 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:32.206876 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:32.707145 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:33.207037 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:33.706961 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:33.797367 1585635 kubeadm.go:1113] duration metric: took 12.720372697s to wait for elevateKubeSystemPrivileges
	I0731 22:32:33.797407 1585635 kubeadm.go:394] duration metric: took 28.534200707s to StartCluster
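
The run of `kubectl get sa default` calls above, one every ~500ms from 22:32:21 to 22:32:33, is the elevateKubeSystemPrivileges wait: minikube re-runs the command until the `default` ServiceAccount exists, the signal that the API server is serving requests and kube-system RBAC is in place. A minimal sketch of the same loop, reusing the binary and kubeconfig paths from the log (illustrative, not minikube's actual implementation):

	until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # matches the ~500ms cadence of the timestamps above
	done
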
	I0731 22:32:33.797426 1585635 settings.go:142] acquiring lock: {Name:mk3c0c3b857f6d982767b7eb95481d3e4843baa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:33.797541 1585635 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1579223/kubeconfig
	I0731 22:32:33.797913 1585635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/kubeconfig: {Name:mkfef6e38d1ebcc45fcbbe766a2ae2945f7bd392 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:33.798103 1585635 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 22:32:33.798203 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 22:32:33.798456 1585635 config.go:182] Loaded profile config "addons-849486": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:32:33.798495 1585635 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
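
The `toEnable` map above is the resolved addon set for this run: entries marked `true` (ingress, ingress-dns, metrics-server, registry, csi-hostpath-driver, and so on) are enabled in the steps that follow, and the rest are skipped. For reference, the same selection can be requested from the CLI (illustrative; valid names come from `minikube addons list`):

	minikube -p addons-849486 addons enable metrics-server
	# or request a set up front when creating the profile:
	minikube start -p addons-849486 --addons=ingress,ingress-dns,metrics-server,registry
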
	I0731 22:32:33.798571 1585635 addons.go:69] Setting yakd=true in profile "addons-849486"
	I0731 22:32:33.798592 1585635 addons.go:234] Setting addon yakd=true in "addons-849486"
	I0731 22:32:33.798614 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:33.799063 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.799199 1585635 addons.go:69] Setting inspektor-gadget=true in profile "addons-849486"
	I0731 22:32:33.799218 1585635 addons.go:234] Setting addon inspektor-gadget=true in "addons-849486"
	I0731 22:32:33.799241 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:33.799589 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.799901 1585635 addons.go:69] Setting metrics-server=true in profile "addons-849486"
	I0731 22:32:33.799960 1585635 addons.go:234] Setting addon metrics-server=true in "addons-849486"
	I0731 22:32:33.799997 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:33.800416 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.801826 1585635 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-849486"
	I0731 22:32:33.801857 1585635 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-849486"
	I0731 22:32:33.801887 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:33.802264 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.803496 1585635 addons.go:69] Setting cloud-spanner=true in profile "addons-849486"
	I0731 22:32:33.803539 1585635 addons.go:234] Setting addon cloud-spanner=true in "addons-849486"
	I0731 22:32:33.805393 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:33.806361 1585635 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-849486"
	I0731 22:32:33.806426 1585635 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-849486"
	I0731 22:32:33.806460 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:33.806834 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.807114 1585635 addons.go:69] Setting registry=true in profile "addons-849486"
	I0731 22:32:33.807150 1585635 addons.go:234] Setting addon registry=true in "addons-849486"
	I0731 22:32:33.807176 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:33.807600 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.811465 1585635 addons.go:69] Setting default-storageclass=true in profile "addons-849486"
	I0731 22:32:33.811556 1585635 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-849486"
	I0731 22:32:33.811906 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.823330 1585635 addons.go:69] Setting storage-provisioner=true in profile "addons-849486"
	I0731 22:32:33.823389 1585635 addons.go:234] Setting addon storage-provisioner=true in "addons-849486"
	I0731 22:32:33.823547 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:33.823988 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.826550 1585635 addons.go:69] Setting gcp-auth=true in profile "addons-849486"
	I0731 22:32:33.826628 1585635 mustload.go:65] Loading cluster: addons-849486
	I0731 22:32:33.826827 1585635 config.go:182] Loaded profile config "addons-849486": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:32:33.827119 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.836753 1585635 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-849486"
	I0731 22:32:33.836807 1585635 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-849486"
	I0731 22:32:33.837146 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.837281 1585635 addons.go:69] Setting ingress=true in profile "addons-849486"
	I0731 22:32:33.837309 1585635 addons.go:234] Setting addon ingress=true in "addons-849486"
	I0731 22:32:33.837348 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:33.837706 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.856144 1585635 addons.go:69] Setting ingress-dns=true in profile "addons-849486"
	I0731 22:32:33.856196 1585635 addons.go:234] Setting addon ingress-dns=true in "addons-849486"
	I0731 22:32:33.856243 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:33.856703 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.867285 1585635 addons.go:69] Setting volcano=true in profile "addons-849486"
	I0731 22:32:33.867333 1585635 addons.go:234] Setting addon volcano=true in "addons-849486"
	I0731 22:32:33.867372 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:33.867815 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.883202 1585635 addons.go:69] Setting volumesnapshots=true in profile "addons-849486"
	I0731 22:32:33.883298 1585635 addons.go:234] Setting addon volumesnapshots=true in "addons-849486"
	I0731 22:32:33.883368 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:33.884601 1585635 out.go:177] * Verifying Kubernetes components...
	I0731 22:32:33.893878 1585635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:32:33.927873 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.951805 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.975874 1585635 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0731 22:32:33.977750 1585635 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0731 22:32:33.977811 1585635 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0731 22:32:33.977907 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:33.994407 1585635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0731 22:32:33.997276 1585635 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0731 22:32:33.998919 1585635 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0731 22:32:33.999023 1585635 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0731 22:32:33.999471 1585635 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0731 22:32:33.999488 1585635 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0731 22:32:33.999561 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:33.999772 1585635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0731 22:32:34.002222 1585635 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0731 22:32:34.002244 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0731 22:32:34.002314 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:34.023776 1585635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0731 22:32:34.026094 1585635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0731 22:32:34.028229 1585635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0731 22:32:34.030323 1585635 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0731 22:32:34.032822 1585635 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0731 22:32:34.035068 1585635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0731 22:32:34.037390 1585635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0731 22:32:34.037425 1585635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0731 22:32:34.037506 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:34.039156 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:34.043166 1585635 addons.go:234] Setting addon default-storageclass=true in "addons-849486"
	I0731 22:32:34.043251 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:34.043729 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:34.051923 1585635 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 22:32:34.051950 1585635 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 22:32:34.052026 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:34.069351 1585635 out.go:177]   - Using image docker.io/registry:2.8.3
	I0731 22:32:34.069413 1585635 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 22:32:34.071845 1585635 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 22:32:34.071870 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 22:32:34.071952 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:34.078672 1585635 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0731 22:32:34.081054 1585635 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 22:32:34.081078 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0731 22:32:34.081165 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:34.090112 1585635 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0731 22:32:34.093595 1585635 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0731 22:32:34.093621 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0731 22:32:34.093688 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:34.128027 1585635 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-849486"
	I0731 22:32:34.128069 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:34.128484 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	W0731 22:32:34.157774 1585635 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0731 22:32:34.201020 1585635 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 22:32:34.203746 1585635 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0731 22:32:34.206023 1585635 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 22:32:34.209680 1585635 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 22:32:34.209742 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0731 22:32:34.209823 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:34.236162 1585635 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0731 22:32:34.242043 1585635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0731 22:32:34.242073 1585635 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0731 22:32:34.242146 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:34.256155 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:34.260528 1585635 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0731 22:32:34.265516 1585635 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0731 22:32:34.265536 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0731 22:32:34.265601 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:34.267359 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:34.268059 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:34.270811 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:34.281631 1585635 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 22:32:34.281653 1585635 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 22:32:34.281730 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:34.302517 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:34.319770 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:34.320277 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:34.320680 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:34.323518 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
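
The bash pipeline above rewrites the CoreDNS Corefile in place: it reads the coredns ConfigMap, uses sed to splice a `hosts` stanza mapping 192.168.49.1 to host.minikube.internal in front of the `forward . /etc/resolv.conf` line (plus a `log` directive before `errors`), and feeds the result back through `kubectl replace`. Once the "host record injected" line appears later in the log, the edit can be confirmed with a check like this (illustrative):

	kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	# expected output, per the sed script above:
	#        hosts {
	#           192.168.49.1 host.minikube.internal
	#           fallthrough
	#        }
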
	I0731 22:32:34.323697 1585635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:32:34.331362 1585635 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0731 22:32:34.333354 1585635 out.go:177]   - Using image docker.io/busybox:stable
	I0731 22:32:34.337085 1585635 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0731 22:32:34.337161 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0731 22:32:34.337227 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:34.365096 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:34.389868 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:34.407286 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	W0731 22:32:34.413237 1585635 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0731 22:32:34.413267 1585635 retry.go:31] will retry after 290.879268ms: ssh: handshake failed: EOF
	I0731 22:32:34.420781 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:34.421198 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	W0731 22:32:34.422449 1585635 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0731 22:32:34.422470 1585635 retry.go:31] will retry after 184.831013ms: ssh: handshake failed: EOF
	I0731 22:32:34.614002 1585635 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0731 22:32:34.614029 1585635 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0731 22:32:34.673256 1585635 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0731 22:32:34.673283 1585635 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0731 22:32:34.681953 1585635 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 22:32:34.681982 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0731 22:32:34.686912 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0731 22:32:34.702030 1585635 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0731 22:32:34.702058 1585635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0731 22:32:34.707138 1585635 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0731 22:32:34.707164 1585635 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0731 22:32:34.753241 1585635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0731 22:32:34.753266 1585635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0731 22:32:34.756662 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 22:32:34.766426 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 22:32:34.782449 1585635 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0731 22:32:34.782481 1585635 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0731 22:32:34.796896 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 22:32:34.800880 1585635 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 22:32:34.800907 1585635 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 22:32:34.864007 1585635 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0731 22:32:34.864035 1585635 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0731 22:32:34.865845 1585635 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0731 22:32:34.865881 1585635 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0731 22:32:34.879524 1585635 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0731 22:32:34.879565 1585635 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0731 22:32:34.890594 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 22:32:34.962585 1585635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0731 22:32:34.962629 1585635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0731 22:32:34.994175 1585635 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 22:32:34.994204 1585635 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 22:32:35.029336 1585635 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0731 22:32:35.029374 1585635 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0731 22:32:35.041001 1585635 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0731 22:32:35.041034 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0731 22:32:35.064773 1585635 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0731 22:32:35.064803 1585635 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0731 22:32:35.140033 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0731 22:32:35.143482 1585635 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0731 22:32:35.143509 1585635 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0731 22:32:35.172282 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 22:32:35.200211 1585635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0731 22:32:35.200257 1585635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0731 22:32:35.205554 1585635 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0731 22:32:35.205592 1585635 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0731 22:32:35.240681 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0731 22:32:35.257775 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0731 22:32:35.285336 1585635 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0731 22:32:35.285370 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0731 22:32:35.304250 1585635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0731 22:32:35.304281 1585635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0731 22:32:35.318376 1585635 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0731 22:32:35.318403 1585635 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0731 22:32:35.364038 1585635 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0731 22:32:35.364066 1585635 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0731 22:32:35.458941 1585635 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 22:32:35.458967 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0731 22:32:35.466795 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0731 22:32:35.471415 1585635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0731 22:32:35.471442 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0731 22:32:35.535626 1585635 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0731 22:32:35.535654 1585635 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0731 22:32:35.637242 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 22:32:35.643247 1585635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0731 22:32:35.643276 1585635 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0731 22:32:35.695922 1585635 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 22:32:35.695948 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0731 22:32:35.863818 1585635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0731 22:32:35.863852 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0731 22:32:35.925947 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 22:32:36.052850 1585635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0731 22:32:36.052881 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0731 22:32:36.186774 1585635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 22:32:36.186806 1585635 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0731 22:32:36.307358 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 22:32:36.826362 1585635 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.502637533s)
	I0731 22:32:36.827377 1585635 node_ready.go:35] waiting up to 6m0s for node "addons-849486" to be "Ready" ...
	I0731 22:32:36.827599 1585635 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.504059706s)
	I0731 22:32:36.827619 1585635 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0731 22:32:36.917280 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.230329672s)
	I0731 22:32:37.635839 1585635 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-849486" context rescaled to 1 replicas
	I0731 22:32:38.971338 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
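
node_ready.go polls the node object's Ready condition; the periodic `"Ready":"False"` lines that follow are those checks, repeated until the kubelet posts Ready. The same wait can be run by hand (illustrative timeout):

	kubectl wait --for=condition=Ready node/addons-849486 --timeout=360s
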
	I0731 22:32:39.608209 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.851496424s)
	I0731 22:32:40.730281 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.963816204s)
	I0731 22:32:40.730373 1585635 addons.go:475] Verifying addon ingress=true in "addons-849486"
	I0731 22:32:40.730796 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.933872607s)
	I0731 22:32:40.730979 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.84035121s)
	I0731 22:32:40.731031 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.590974926s)
	I0731 22:32:40.731087 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.558779907s)
	I0731 22:32:40.731096 1585635 addons.go:475] Verifying addon metrics-server=true in "addons-849486"
	I0731 22:32:40.731151 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.490416885s)
	I0731 22:32:40.731160 1585635 addons.go:475] Verifying addon registry=true in "addons-849486"
	I0731 22:32:40.731523 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.473719263s)
	I0731 22:32:40.731711 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.264880426s)
	I0731 22:32:40.731839 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.094563053s)
	I0731 22:32:40.731922 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.805898409s)
	W0731 22:32:40.732689 1585635 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0731 22:32:40.732716 1585635 retry.go:31] will retry after 192.341389ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
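
The failure above is an ordering problem rather than a bad manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same batch as the CRDs that define its kind, and the API server has not yet established the new types, hence "ensure CRDs are installed first". minikube's answer, per the retry.go line, is to wait briefly and re-apply; the `kubectl apply --force` run below succeeds. A generic sketch of that retry pattern (illustrative, not minikube's code):

	for i in 1 2 3 4 5; do
	  kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml && break
	  sleep "$i"   # back off while the freshly created CRDs become established
	done
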
	I0731 22:32:40.733872 1585635 out.go:177] * Verifying ingress addon...
	I0731 22:32:40.735147 1585635 out.go:177] * Verifying registry addon...
	I0731 22:32:40.735166 1585635 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-849486 service yakd-dashboard -n yakd-dashboard
	
	I0731 22:32:40.738239 1585635 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0731 22:32:40.739205 1585635 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0731 22:32:40.759027 1585635 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0731 22:32:40.759090 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:40.765744 1585635 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0731 22:32:40.765818 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
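
The kapi.go lines here and below are label-selector polling: the harness lists pods matching the selector and re-checks their state until they leave Pending. Outside the harness, the equivalent waits can be expressed directly with kubectl (illustrative timeouts):

	kubectl -n ingress-nginx wait --for=condition=Ready pod \
	  -l app.kubernetes.io/name=ingress-nginx --timeout=300s
	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=registry --timeout=300s
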
	W0731 22:32:40.783684 1585635 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
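
The "object has been modified" error is Kubernetes optimistic concurrency at work: another writer updated the local-path StorageClass between the addon's read and its write, so the stale resourceVersion was rejected. A common way to sidestep that race is a patch, which carries no resourceVersion and therefore cannot conflict (an illustrative mitigation, not what the addon code actually does):

	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
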
	I0731 22:32:40.925870 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 22:32:41.262167 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:41.264940 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:41.386966 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:32:41.564556 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.257136217s)
	I0731 22:32:41.564648 1585635 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-849486"
	I0731 22:32:41.567103 1585635 out.go:177] * Verifying csi-hostpath-driver addon...
	I0731 22:32:41.571370 1585635 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0731 22:32:41.622859 1585635 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0731 22:32:41.622933 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:41.744383 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:41.745012 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:42.077278 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:42.252256 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:42.255666 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:42.318415 1585635 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0731 22:32:42.318505 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:42.352276 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:42.481330 1585635 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0731 22:32:42.515983 1585635 addons.go:234] Setting addon gcp-auth=true in "addons-849486"
	I0731 22:32:42.516037 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:42.516494 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:42.544941 1585635 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0731 22:32:42.545014 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:42.568744 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:42.577260 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:42.743056 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:42.745599 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:43.077148 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:43.245121 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:43.246131 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:43.576555 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:43.769034 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:43.769994 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:43.844871 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:32:44.076871 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:44.177742 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.251828028s)
	I0731 22:32:44.177855 1585635 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.632877013s)
	I0731 22:32:44.180295 1585635 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 22:32:44.182445 1585635 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0731 22:32:44.184829 1585635 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0731 22:32:44.184884 1585635 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0731 22:32:44.212759 1585635 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0731 22:32:44.212837 1585635 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0731 22:32:44.231550 1585635 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 22:32:44.231669 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0731 22:32:44.245781 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:44.246221 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:44.256311 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 22:32:44.576248 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:44.753951 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:44.755250 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:44.877627 1585635 addons.go:475] Verifying addon gcp-auth=true in "addons-849486"
	I0731 22:32:44.880009 1585635 out.go:177] * Verifying gcp-auth addon...
	I0731 22:32:44.883172 1585635 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0731 22:32:44.887849 1585635 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0731 22:32:44.887919 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:45.090712 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:45.247135 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:45.249137 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:45.389389 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:45.576034 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:45.743912 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:45.744141 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:45.887236 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:46.076234 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:46.242659 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:46.244638 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:46.330631 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:32:46.387466 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:46.575637 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:46.744520 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:46.745383 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:46.887579 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:47.075957 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:47.243562 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:47.243715 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:47.386486 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:47.575429 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:47.742886 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:47.743102 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:47.886745 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:48.076022 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:48.243277 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:48.244229 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:48.386624 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:48.575845 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:48.742319 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:48.744367 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:48.830378 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:32:48.886983 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:49.076063 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:49.242544 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:49.243638 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:49.386885 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:49.575976 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:49.743622 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:49.745009 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:49.886723 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:50.075539 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:50.243289 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:50.244039 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:50.386415 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:50.575926 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:50.742467 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:50.743541 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:50.831310 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:32:50.887654 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:51.075487 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:51.242267 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:51.243398 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:51.386228 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:51.576435 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:51.742838 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:51.743277 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:51.887435 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:52.076148 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:52.255669 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:52.256630 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:52.387128 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:52.575760 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:52.742006 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:52.743348 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:52.887549 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:53.076613 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:53.242422 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:53.243176 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:53.330705 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:32:53.386479 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:53.575225 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:53.742284 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:53.743497 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:53.886508 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:54.075472 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:54.243675 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:54.244402 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:54.387198 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:54.575856 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:54.743038 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:54.744293 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:54.886424 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:55.075975 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:55.244357 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:55.244381 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:55.386862 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:55.575991 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:55.743589 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:55.744656 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:55.830763 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:32:55.886517 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:56.075701 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:56.243400 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:56.244101 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:56.387194 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:56.576086 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:56.742513 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:56.743796 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:56.887311 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:57.076096 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:57.242051 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:57.243212 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:57.386682 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:57.576453 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:57.742357 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:57.744079 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:57.831382 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:32:57.886971 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:58.076338 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:58.242299 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:58.243475 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:58.387433 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:58.575633 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:58.742103 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:58.742999 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:58.886662 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:59.075576 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:59.244900 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:59.246342 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:59.387074 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:59.576231 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:59.742470 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:59.743378 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:59.887345 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:00.087879 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:00.273190 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:00.273970 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:00.337689 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:33:00.387618 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:00.575811 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:00.743059 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:00.744228 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:00.886866 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:01.075840 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:01.243283 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:01.244117 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:01.387279 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:01.576499 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:01.744588 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:01.744728 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:01.886963 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:02.075985 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:02.242779 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:02.243816 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:02.386346 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:02.576019 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:02.744680 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:02.746178 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:02.831029 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:33:02.886942 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:03.076206 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:03.242979 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:03.244026 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:03.400313 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:03.575961 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:03.745510 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:03.746689 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:03.887426 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:04.076155 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:04.244568 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:04.245955 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:04.386830 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:04.576095 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:04.742032 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:04.742861 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:04.831212 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:33:04.886527 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:05.075985 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:05.243598 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:05.244272 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:05.387347 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:05.575338 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:05.743181 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:05.743911 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:05.887476 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:06.076486 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:06.242708 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:06.244169 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:06.386904 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:06.575824 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:06.743190 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:06.744352 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:06.886948 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:07.075642 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:07.243175 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:07.245563 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:07.330643 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:33:07.386491 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:07.579509 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:07.743657 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:07.744485 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:07.887276 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:08.075610 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:08.243705 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:08.244636 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:08.386884 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:08.575851 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:08.743386 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:08.744102 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:08.886775 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:09.076849 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:09.242494 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:09.243894 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:09.331209 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:33:09.387698 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:09.575800 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:09.743178 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:09.744020 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:09.886411 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:10.075588 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:10.242910 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:10.243518 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:10.387150 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:10.576509 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:10.742437 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:10.744196 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:10.886860 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:11.077525 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:11.243246 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:11.244089 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:11.332639 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:33:11.386948 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:11.576090 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:11.745081 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:11.747185 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:11.887156 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:12.076841 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:12.244185 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:12.244480 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:12.387200 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:12.576262 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:12.743793 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:12.745368 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:12.887560 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:13.075825 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:13.243062 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:13.243792 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:13.386950 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:13.575512 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:13.743698 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:13.744601 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:13.831057 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:33:13.887284 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:14.075720 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:14.243561 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:14.243997 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:14.386532 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:14.575816 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:14.743331 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:14.744045 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:14.887434 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:15.075930 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:15.243695 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:15.244257 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:15.386880 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:15.576057 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:15.742325 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:15.743569 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:15.887364 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:16.076223 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:16.242778 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:16.243580 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:16.330601 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:33:16.387101 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:16.575683 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:16.741916 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:16.743935 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:16.887176 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:17.076280 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:17.242445 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:17.243460 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:17.386437 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:17.576205 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:17.741908 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:17.743366 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:17.886588 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:18.076795 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:18.242934 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:18.243862 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:18.331565 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:33:18.387366 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:18.575888 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:18.743095 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:18.743620 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:18.886759 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:19.075983 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:19.243492 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:19.244248 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:19.386384 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:19.575306 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:19.742296 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:19.743036 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:19.886512 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:20.095710 1585635 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0731 22:33:20.095746 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:20.272166 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:20.272761 1585635 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0731 22:33:20.272774 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:20.396236 1585635 node_ready.go:49] node "addons-849486" has status "Ready":"True"
	I0731 22:33:20.396272 1585635 node_ready.go:38] duration metric: took 43.568862212s for node "addons-849486" to be "Ready" ...
	I0731 22:33:20.396284 1585635 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 22:33:20.460926 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:20.465502 1585635 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qv2pm" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:20.627876 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:20.806883 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:20.807874 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:20.897095 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:21.077626 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:21.247125 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:21.249917 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:21.386977 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:21.576397 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:21.742968 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:21.744283 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:21.887461 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:21.974482 1585635 pod_ready.go:92] pod "coredns-7db6d8ff4d-qv2pm" in "kube-system" namespace has status "Ready":"True"
	I0731 22:33:21.974507 1585635 pod_ready.go:81] duration metric: took 1.508971834s for pod "coredns-7db6d8ff4d-qv2pm" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:21.974533 1585635 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-849486" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:21.980872 1585635 pod_ready.go:92] pod "etcd-addons-849486" in "kube-system" namespace has status "Ready":"True"
	I0731 22:33:21.980896 1585635 pod_ready.go:81] duration metric: took 6.356016ms for pod "etcd-addons-849486" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:21.980910 1585635 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-849486" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:21.987115 1585635 pod_ready.go:92] pod "kube-apiserver-addons-849486" in "kube-system" namespace has status "Ready":"True"
	I0731 22:33:21.987147 1585635 pod_ready.go:81] duration metric: took 6.229673ms for pod "kube-apiserver-addons-849486" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:21.987158 1585635 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-849486" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:21.992769 1585635 pod_ready.go:92] pod "kube-controller-manager-addons-849486" in "kube-system" namespace has status "Ready":"True"
	I0731 22:33:21.992794 1585635 pod_ready.go:81] duration metric: took 5.628051ms for pod "kube-controller-manager-addons-849486" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:21.992806 1585635 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mxw62" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:21.998987 1585635 pod_ready.go:92] pod "kube-proxy-mxw62" in "kube-system" namespace has status "Ready":"True"
	I0731 22:33:21.999021 1585635 pod_ready.go:81] duration metric: took 6.207208ms for pod "kube-proxy-mxw62" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:21.999031 1585635 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-849486" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:22.080257 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:22.245906 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:22.247278 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:22.370669 1585635 pod_ready.go:92] pod "kube-scheduler-addons-849486" in "kube-system" namespace has status "Ready":"True"
	I0731 22:33:22.370696 1585635 pod_ready.go:81] duration metric: took 371.657346ms for pod "kube-scheduler-addons-849486" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:22.370708 1585635 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:22.387294 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:22.577295 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:22.743926 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:22.746520 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:22.887355 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:23.077307 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:23.244498 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:23.247478 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:23.389515 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:23.580542 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:23.751087 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:23.752014 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:23.887061 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:24.084325 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:24.253062 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:24.254305 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:24.378689 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:24.419268 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:24.578345 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:24.747327 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:24.748370 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:24.888070 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:25.080909 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:25.243772 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:25.245844 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:25.387252 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:25.576849 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:25.743092 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:25.744783 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:25.886616 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:26.078207 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:26.245273 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:26.245784 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:26.387173 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:26.577887 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:26.744918 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:26.752885 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:26.877086 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:26.887302 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:27.078036 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:27.243523 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:27.245357 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:27.386897 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:27.578774 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:27.744424 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:27.744799 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:27.887110 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:28.078521 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:28.243167 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:28.245480 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:28.387114 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:28.577475 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:28.744241 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:28.745120 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:28.887164 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:29.080269 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:29.246372 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:29.247455 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:29.381194 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:29.388617 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:29.582722 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:29.743822 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:29.748171 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:29.888787 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:30.078620 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:30.244281 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:30.245613 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:30.387214 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:30.576633 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:30.743549 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:30.744903 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:30.886501 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:31.077652 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:31.244901 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:31.245554 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:31.387121 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:31.577010 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:31.743759 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:31.747394 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:31.876912 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:31.887678 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:32.077050 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:32.244646 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:32.245875 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:32.391848 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:32.577464 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:32.744055 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:32.745355 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:32.887249 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:33.077781 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:33.243618 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:33.246059 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:33.386794 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:33.576870 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:33.746127 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:33.748896 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:33.878101 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:33.888168 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:34.084976 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:34.244939 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:34.249516 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:34.388026 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:34.578277 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:34.743193 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:34.747468 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:34.888315 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:35.083949 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:35.245861 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:35.246727 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:35.387305 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:35.593815 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:35.747984 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:35.748986 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:35.887755 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:36.078258 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:36.244080 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:36.245289 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:36.377424 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:36.386843 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:36.579585 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:36.757645 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:36.759630 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:36.886308 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:37.077329 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:37.243782 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:37.245017 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:37.386846 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:37.580954 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:37.745280 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:37.746815 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:37.887312 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:38.078610 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:38.243602 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:38.244574 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:38.387100 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:38.577481 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:38.744334 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:38.745285 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:38.876027 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:38.887341 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:39.082444 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:39.246206 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:39.246850 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:39.387153 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:39.577282 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:39.762239 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:39.766300 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:39.890739 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:40.081500 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:40.255171 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:40.256862 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:40.394848 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:40.578235 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:40.757451 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:40.758364 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:40.879216 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:40.886521 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:41.086198 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:41.247832 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:41.249566 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:41.388319 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:41.579938 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:41.746386 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:41.749680 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:41.888087 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:42.099034 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:42.266590 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:42.270920 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:42.409245 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:42.577777 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:42.758638 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:42.760338 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:42.882459 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:42.887872 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:43.078741 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:43.249823 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:43.251616 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:43.390288 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:43.578025 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:43.745801 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:43.747420 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:43.887219 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:44.077598 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:44.243842 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:44.247979 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:44.386914 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:44.578731 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:44.745658 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:44.747530 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:44.886897 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:45.083802 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:45.244975 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:45.246087 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:45.377320 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:45.387475 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:45.577783 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:45.743519 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:45.745987 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:45.887224 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:46.077779 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:46.244599 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:46.245361 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:46.386801 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:46.578011 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:46.743926 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:46.744775 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:46.888676 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:47.077405 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:47.243705 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:47.248275 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:47.387013 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:47.578263 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:47.746441 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:47.746699 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:47.891906 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:47.892804 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:48.077962 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:48.246465 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:48.249008 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:48.386748 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:48.577898 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:48.753817 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:48.755514 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:48.887274 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:49.077991 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:49.243959 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:49.245377 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:49.387024 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:49.577246 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:49.744021 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:49.744449 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:49.887214 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:50.076850 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:50.249775 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:50.250084 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:50.395278 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:50.407223 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:50.578870 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:50.746257 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:50.747170 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:50.887341 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:51.079443 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:51.249375 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:51.255737 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:51.387901 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:51.577271 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:51.744139 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:51.747638 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:51.905706 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:52.077754 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:52.244108 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:52.245651 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:52.387550 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:52.577451 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:52.744650 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:52.745692 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:52.877516 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:52.889026 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:53.079252 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:53.244853 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:53.246222 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:53.390921 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:53.577242 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:53.745558 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:53.745778 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:53.887893 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:54.081987 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:54.243628 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:54.245938 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:54.388640 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:54.578026 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:54.749012 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:54.754160 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:54.879380 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:54.888296 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:55.078667 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:55.245214 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:55.246513 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:55.387078 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:55.578115 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:55.745488 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:55.748318 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:55.887164 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:56.095720 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:56.244702 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:56.247344 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:56.387431 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:56.580411 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:56.742812 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:56.745754 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:56.887178 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:57.078023 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:57.243212 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:57.244502 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:57.376179 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:57.386721 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:57.576694 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:57.742776 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:57.744843 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:57.886977 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:58.078118 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:58.245998 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:58.246682 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:58.386621 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:58.579787 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:58.745252 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:58.746683 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:58.887342 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:59.078292 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:59.244286 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:59.247592 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:59.378413 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:59.387145 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:59.578800 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:59.745726 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:59.748974 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:59.886682 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:00.103141 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:00.360787 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:00.363278 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:00.407092 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:00.578874 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:00.747020 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:00.748324 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:00.887554 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:01.081192 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:01.249354 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:01.250275 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:01.379636 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:01.389170 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:01.577439 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:01.743226 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:01.744329 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:01.886815 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:02.077273 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:02.243186 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:02.244753 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:02.386406 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:02.577224 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:02.743960 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:02.744663 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:02.887401 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:03.078809 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:03.252578 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:03.253308 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:03.387405 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:03.577870 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:03.745659 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:03.747052 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:03.880391 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:03.888620 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:04.077598 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:04.265932 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:04.275088 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:04.391130 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:04.577893 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:04.747491 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:04.749681 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:04.888781 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:05.079549 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:05.245548 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:05.250589 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:05.390843 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:05.578271 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:05.746769 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:05.748129 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:05.887525 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:06.077692 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:06.244508 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:06.246244 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:06.377529 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:06.387112 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:06.578014 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:06.744378 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:06.745231 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:06.886348 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:07.087949 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:07.244094 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:07.249364 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:07.386605 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:07.577593 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:07.743959 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:07.753452 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:07.887784 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:08.079826 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:08.246408 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:08.249241 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:08.382278 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:08.391012 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:08.582697 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:08.748872 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:08.750227 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:08.887277 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:09.078473 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:09.259490 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:09.259856 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:09.386845 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:09.579861 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:09.753611 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:09.755559 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:09.892274 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:10.078945 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:10.246135 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:10.250377 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:10.387078 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:10.582028 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:10.759973 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:10.764765 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:10.877904 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:10.887396 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:11.078420 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:11.245171 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:11.250738 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:11.386725 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:11.579585 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:11.744153 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:11.744694 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:11.887788 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:12.078732 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:12.243939 1585635 kapi.go:107] duration metric: took 1m31.50569722s to wait for kubernetes.io/minikube-addons=registry ...
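The kapi.go:96/kapi.go:107 pair above is minikube's internal readiness poller: it re-checks the labeled pod roughly twice a second and prints the duration line once the pod turns Ready. A roughly equivalent manual check is a kubectl wait against the same label selector (a sketch only; the kube-system namespace is an assumption about where the registry addon pods land):

    kubectl --context addons-849486 -n kube-system wait --for=condition=ready pod \
      -l kubernetes.io/minikube-addons=registry --timeout=6m0s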
	I0731 22:34:12.244850 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:12.386822 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:12.576558 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:12.743462 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:12.878041 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:12.890117 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:13.078945 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:13.244745 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:13.389814 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:13.577580 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:13.745491 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:13.887135 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:14.077444 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:14.244445 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:14.387690 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:14.578445 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:14.744503 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:14.894823 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:14.902050 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:15.078810 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:15.249215 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:15.408789 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:15.579618 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:15.744159 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:15.887958 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:16.077378 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:16.244254 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:16.387034 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:16.578252 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:16.746126 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:16.887598 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:17.094901 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:17.244349 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:17.379366 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:17.393075 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:17.577646 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:17.744812 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:17.891206 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:18.078720 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:18.244191 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:18.388797 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:18.577646 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:18.744548 1585635 kapi.go:107] duration metric: took 1m38.005338482s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0731 22:34:18.887930 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:19.078019 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:19.382525 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:19.388757 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:19.577915 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:19.886686 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:20.084112 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:20.401373 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:20.581997 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:20.886944 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:21.077623 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:21.393946 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:21.576996 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:21.876730 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:21.886784 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:22.077355 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:22.387496 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:22.586340 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:22.887451 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:23.079870 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:23.390044 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:23.576931 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:23.877143 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:23.886639 1585635 kapi.go:107] duration metric: took 1m39.003466373s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0731 22:34:23.889356 1585635 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-849486 cluster.
	I0731 22:34:23.891944 1585635 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0731 22:34:23.894273 1585635 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
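Both gcp-auth hints above can be acted on directly. A hedged sketch of each follows; the pod name and image are hypothetical, the label value is an assumption (only the gcp-auth-skip-secret key is documented above), and --refresh is quoted from the log line itself:

    # hypothetical pod opted out of credential mounting via the gcp-auth-skip-secret label
    kubectl --context addons-849486 run demo --image=nginx --labels=gcp-auth-skip-secret=true
    # re-run the addon to mount credentials into pods created before it was enabled
    minikube -p addons-849486 addons enable gcp-auth --refresh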
	I0731 22:34:24.078145 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:24.579009 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:25.077843 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:25.578009 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:26.076640 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:26.376624 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:26.577495 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:27.077960 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:27.577263 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:28.076730 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:28.376693 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:28.577671 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:29.077480 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:29.577326 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:30.077643 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:30.379062 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:30.576665 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:31.080045 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:31.577524 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:32.078310 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:32.577531 1585635 kapi.go:107] duration metric: took 1m51.006159376s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0731 22:34:32.580434 1585635 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0731 22:34:32.583147 1585635 addons.go:510] duration metric: took 1m58.78463683s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns cloud-spanner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
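With the enable phase complete, the same thirteen addons can be confirmed out-of-band from the profile's addon listing (sketch; output columns vary across minikube versions):

    minikube -p addons-849486 addons list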
	I0731 22:34:32.877638 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:35.377360 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:37.377692 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:39.877811 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:42.377875 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:44.877299 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:46.877651 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:49.376702 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:51.376786 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:53.377146 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:55.876547 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:57.876671 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:59.877192 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:35:01.878087 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:35:04.377293 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:35:06.876971 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:35:08.377686 1585635 pod_ready.go:92] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:08.377720 1585635 pod_ready.go:81] duration metric: took 1m46.007003338s for pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:08.377732 1585635 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-tjbj7" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:08.383583 1585635 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-tjbj7" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:08.383609 1585635 pod_ready.go:81] duration metric: took 5.867968ms for pod "nvidia-device-plugin-daemonset-tjbj7" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:08.383631 1585635 pod_ready.go:38] duration metric: took 1m47.98733424s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 22:35:08.383648 1585635 api_server.go:52] waiting for apiserver process to appear ...
	I0731 22:35:08.384243 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 22:35:08.384320 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 22:35:08.434388 1585635 cri.go:89] found id: "8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc"
	I0731 22:35:08.434451 1585635 cri.go:89] found id: ""
	I0731 22:35:08.434472 1585635 logs.go:276] 1 containers: [8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc]
	I0731 22:35:08.434561 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:08.438977 1585635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 22:35:08.439093 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 22:35:08.481395 1585635 cri.go:89] found id: "5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699"
	I0731 22:35:08.481420 1585635 cri.go:89] found id: ""
	I0731 22:35:08.481428 1585635 logs.go:276] 1 containers: [5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699]
	I0731 22:35:08.481508 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:08.485164 1585635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 22:35:08.485250 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 22:35:08.529715 1585635 cri.go:89] found id: "033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b"
	I0731 22:35:08.529738 1585635 cri.go:89] found id: ""
	I0731 22:35:08.529761 1585635 logs.go:276] 1 containers: [033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b]
	I0731 22:35:08.529848 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:08.533519 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 22:35:08.533594 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 22:35:08.573691 1585635 cri.go:89] found id: "43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d"
	I0731 22:35:08.573715 1585635 cri.go:89] found id: ""
	I0731 22:35:08.573723 1585635 logs.go:276] 1 containers: [43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d]
	I0731 22:35:08.573811 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:08.577485 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 22:35:08.577611 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 22:35:08.621799 1585635 cri.go:89] found id: "6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc"
	I0731 22:35:08.621823 1585635 cri.go:89] found id: ""
	I0731 22:35:08.621831 1585635 logs.go:276] 1 containers: [6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc]
	I0731 22:35:08.621912 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:08.625761 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 22:35:08.625850 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 22:35:08.666050 1585635 cri.go:89] found id: "f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf"
	I0731 22:35:08.666070 1585635 cri.go:89] found id: ""
	I0731 22:35:08.666079 1585635 logs.go:276] 1 containers: [f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf]
	I0731 22:35:08.666133 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:08.669582 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 22:35:08.669692 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 22:35:08.707843 1585635 cri.go:89] found id: "2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757"
	I0731 22:35:08.707864 1585635 cri.go:89] found id: ""
	I0731 22:35:08.707872 1585635 logs.go:276] 1 containers: [2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757]
	I0731 22:35:08.707936 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:08.711678 1585635 logs.go:123] Gathering logs for kubelet ...
	I0731 22:35:08.711707 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 22:35:08.794103 1585635 logs.go:123] Gathering logs for coredns [033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b] ...
	I0731 22:35:08.794143 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b"
	I0731 22:35:08.848313 1585635 logs.go:123] Gathering logs for etcd [5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699] ...
	I0731 22:35:08.848342 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699"
	I0731 22:35:08.901253 1585635 logs.go:123] Gathering logs for kube-scheduler [43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d] ...
	I0731 22:35:08.901286 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d"
	I0731 22:35:08.951128 1585635 logs.go:123] Gathering logs for kube-proxy [6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc] ...
	I0731 22:35:08.951166 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc"
	I0731 22:35:08.990548 1585635 logs.go:123] Gathering logs for kube-controller-manager [f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf] ...
	I0731 22:35:08.990576 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf"
	I0731 22:35:09.064518 1585635 logs.go:123] Gathering logs for kindnet [2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757] ...
	I0731 22:35:09.064561 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757"
	I0731 22:35:09.110247 1585635 logs.go:123] Gathering logs for dmesg ...
	I0731 22:35:09.110285 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 22:35:09.134555 1585635 logs.go:123] Gathering logs for describe nodes ...
	I0731 22:35:09.134631 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 22:35:09.325410 1585635 logs.go:123] Gathering logs for kube-apiserver [8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc] ...
	I0731 22:35:09.325503 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc"
	I0731 22:35:09.379782 1585635 logs.go:123] Gathering logs for CRI-O ...
	I0731 22:35:09.379819 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 22:35:09.474280 1585635 logs.go:123] Gathering logs for container status ...
	I0731 22:35:09.474317 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 22:35:12.026789 1585635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:35:12.042395 1585635 api_server.go:72] duration metric: took 2m38.244254871s to wait for apiserver process to appear ...
	I0731 22:35:12.042421 1585635 api_server.go:88] waiting for apiserver healthz status ...
	I0731 22:35:12.042461 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 22:35:12.042524 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 22:35:12.095037 1585635 cri.go:89] found id: "8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc"
	I0731 22:35:12.095057 1585635 cri.go:89] found id: ""
	I0731 22:35:12.095065 1585635 logs.go:276] 1 containers: [8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc]
	I0731 22:35:12.095155 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:12.099480 1585635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 22:35:12.099560 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 22:35:12.143149 1585635 cri.go:89] found id: "5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699"
	I0731 22:35:12.143170 1585635 cri.go:89] found id: ""
	I0731 22:35:12.143178 1585635 logs.go:276] 1 containers: [5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699]
	I0731 22:35:12.143245 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:12.146914 1585635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 22:35:12.146984 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 22:35:12.207387 1585635 cri.go:89] found id: "033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b"
	I0731 22:35:12.207409 1585635 cri.go:89] found id: ""
	I0731 22:35:12.207416 1585635 logs.go:276] 1 containers: [033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b]
	I0731 22:35:12.207472 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:12.210978 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 22:35:12.211052 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 22:35:12.249122 1585635 cri.go:89] found id: "43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d"
	I0731 22:35:12.249185 1585635 cri.go:89] found id: ""
	I0731 22:35:12.249215 1585635 logs.go:276] 1 containers: [43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d]
	I0731 22:35:12.249302 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:12.253219 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 22:35:12.253312 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 22:35:12.291887 1585635 cri.go:89] found id: "6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc"
	I0731 22:35:12.291909 1585635 cri.go:89] found id: ""
	I0731 22:35:12.291917 1585635 logs.go:276] 1 containers: [6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc]
	I0731 22:35:12.291979 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:12.295522 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 22:35:12.295605 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 22:35:12.355490 1585635 cri.go:89] found id: "f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf"
	I0731 22:35:12.355552 1585635 cri.go:89] found id: ""
	I0731 22:35:12.355575 1585635 logs.go:276] 1 containers: [f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf]
	I0731 22:35:12.355643 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:12.359565 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 22:35:12.359644 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 22:35:12.401208 1585635 cri.go:89] found id: "2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757"
	I0731 22:35:12.401293 1585635 cri.go:89] found id: ""
	I0731 22:35:12.401322 1585635 logs.go:276] 1 containers: [2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757]
	I0731 22:35:12.401390 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:12.404965 1585635 logs.go:123] Gathering logs for CRI-O ...
	I0731 22:35:12.404987 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 22:35:12.501081 1585635 logs.go:123] Gathering logs for dmesg ...
	I0731 22:35:12.501124 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 22:35:12.521644 1585635 logs.go:123] Gathering logs for describe nodes ...
	I0731 22:35:12.521687 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 22:35:12.669511 1585635 logs.go:123] Gathering logs for etcd [5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699] ...
	I0731 22:35:12.669544 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699"
	I0731 22:35:12.725707 1585635 logs.go:123] Gathering logs for kube-proxy [6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc] ...
	I0731 22:35:12.725747 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc"
	I0731 22:35:12.775029 1585635 logs.go:123] Gathering logs for kindnet [2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757] ...
	I0731 22:35:12.775057 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757"
	I0731 22:35:12.821262 1585635 logs.go:123] Gathering logs for container status ...
	I0731 22:35:12.821293 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 22:35:12.871603 1585635 logs.go:123] Gathering logs for kubelet ...
	I0731 22:35:12.871636 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 22:35:12.956264 1585635 logs.go:123] Gathering logs for kube-apiserver [8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc] ...
	I0731 22:35:12.956300 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc"
	I0731 22:35:13.028806 1585635 logs.go:123] Gathering logs for coredns [033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b] ...
	I0731 22:35:13.028839 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b"
	I0731 22:35:13.072609 1585635 logs.go:123] Gathering logs for kube-scheduler [43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d] ...
	I0731 22:35:13.072639 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d"
	I0731 22:35:13.126190 1585635 logs.go:123] Gathering logs for kube-controller-manager [f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf] ...
	I0731 22:35:13.126221 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf"
	I0731 22:35:15.721304 1585635 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0731 22:35:15.730591 1585635 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0731 22:35:15.731562 1585635 api_server.go:141] control plane version: v1.30.3
	I0731 22:35:15.731585 1585635 api_server.go:131] duration metric: took 3.689156807s to wait for apiserver health ...
	I0731 22:35:15.731594 1585635 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 22:35:15.731615 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 22:35:15.731677 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 22:35:15.775642 1585635 cri.go:89] found id: "8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc"
	I0731 22:35:15.775664 1585635 cri.go:89] found id: ""
	I0731 22:35:15.775673 1585635 logs.go:276] 1 containers: [8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc]
	I0731 22:35:15.775731 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:15.779297 1585635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 22:35:15.779370 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 22:35:15.821650 1585635 cri.go:89] found id: "5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699"
	I0731 22:35:15.821672 1585635 cri.go:89] found id: ""
	I0731 22:35:15.821680 1585635 logs.go:276] 1 containers: [5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699]
	I0731 22:35:15.821735 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:15.825240 1585635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 22:35:15.825324 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 22:35:15.862900 1585635 cri.go:89] found id: "033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b"
	I0731 22:35:15.862922 1585635 cri.go:89] found id: ""
	I0731 22:35:15.862930 1585635 logs.go:276] 1 containers: [033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b]
	I0731 22:35:15.862989 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:15.866695 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 22:35:15.866771 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 22:35:15.903992 1585635 cri.go:89] found id: "43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d"
	I0731 22:35:15.904027 1585635 cri.go:89] found id: ""
	I0731 22:35:15.904035 1585635 logs.go:276] 1 containers: [43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d]
	I0731 22:35:15.904126 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:15.907764 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 22:35:15.907861 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 22:35:15.947029 1585635 cri.go:89] found id: "6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc"
	I0731 22:35:15.947059 1585635 cri.go:89] found id: ""
	I0731 22:35:15.947070 1585635 logs.go:276] 1 containers: [6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc]
	I0731 22:35:15.947145 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:15.950896 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 22:35:15.950990 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 22:35:15.990729 1585635 cri.go:89] found id: "f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf"
	I0731 22:35:15.990753 1585635 cri.go:89] found id: ""
	I0731 22:35:15.990762 1585635 logs.go:276] 1 containers: [f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf]
	I0731 22:35:15.990821 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:15.994961 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 22:35:15.995033 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 22:35:16.040165 1585635 cri.go:89] found id: "2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757"
	I0731 22:35:16.040188 1585635 cri.go:89] found id: ""
	I0731 22:35:16.040195 1585635 logs.go:276] 1 containers: [2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757]
	I0731 22:35:16.040255 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:16.043890 1585635 logs.go:123] Gathering logs for kube-apiserver [8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc] ...
	I0731 22:35:16.043916 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc"
	I0731 22:35:16.098081 1585635 logs.go:123] Gathering logs for etcd [5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699] ...
	I0731 22:35:16.098115 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699"
	I0731 22:35:16.143838 1585635 logs.go:123] Gathering logs for coredns [033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b] ...
	I0731 22:35:16.143871 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b"
	I0731 22:35:16.187912 1585635 logs.go:123] Gathering logs for kube-controller-manager [f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf] ...
	I0731 22:35:16.187942 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf"
	I0731 22:35:16.276841 1585635 logs.go:123] Gathering logs for kindnet [2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757] ...
	I0731 22:35:16.276878 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757"
	I0731 22:35:16.333324 1585635 logs.go:123] Gathering logs for CRI-O ...
	I0731 22:35:16.333353 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 22:35:16.430440 1585635 logs.go:123] Gathering logs for kubelet ...
	I0731 22:35:16.430477 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 22:35:16.516088 1585635 logs.go:123] Gathering logs for dmesg ...
	I0731 22:35:16.516129 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 22:35:16.535819 1585635 logs.go:123] Gathering logs for describe nodes ...
	I0731 22:35:16.535847 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 22:35:16.670971 1585635 logs.go:123] Gathering logs for kube-scheduler [43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d] ...
	I0731 22:35:16.671005 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d"
	I0731 22:35:16.724764 1585635 logs.go:123] Gathering logs for kube-proxy [6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc] ...
	I0731 22:35:16.724795 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc"
	I0731 22:35:16.764416 1585635 logs.go:123] Gathering logs for container status ...
	I0731 22:35:16.764445 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 22:35:19.328261 1585635 system_pods.go:59] 18 kube-system pods found
	I0731 22:35:19.328302 1585635 system_pods.go:61] "coredns-7db6d8ff4d-qv2pm" [d5cb6a71-36b0-4416-9aae-f244288db1a0] Running
	I0731 22:35:19.328308 1585635 system_pods.go:61] "csi-hostpath-attacher-0" [9777d915-8d6e-4ea4-ad27-bf9a834bd6c8] Running
	I0731 22:35:19.328313 1585635 system_pods.go:61] "csi-hostpath-resizer-0" [0871abf1-f861-4fcb-9565-833ce33eb600] Running
	I0731 22:35:19.328317 1585635 system_pods.go:61] "csi-hostpathplugin-54fjr" [41d14787-618a-4f95-99a0-30ed9a484afe] Running
	I0731 22:35:19.328321 1585635 system_pods.go:61] "etcd-addons-849486" [2f51d3a4-3ccd-4c0c-b36e-039e5b90b582] Running
	I0731 22:35:19.328325 1585635 system_pods.go:61] "kindnet-v5dmr" [36e1674f-d093-4894-a330-acf34f2d862a] Running
	I0731 22:35:19.328330 1585635 system_pods.go:61] "kube-apiserver-addons-849486" [9861d6dd-34df-48f6-989e-a5ec25987ba8] Running
	I0731 22:35:19.328365 1585635 system_pods.go:61] "kube-controller-manager-addons-849486" [86b5d477-353e-4384-850e-2fd9de19a5d5] Running
	I0731 22:35:19.328375 1585635 system_pods.go:61] "kube-ingress-dns-minikube" [93c14fc1-ba7a-4b8f-b1bb-b1a447536081] Running
	I0731 22:35:19.328379 1585635 system_pods.go:61] "kube-proxy-mxw62" [2c575b64-75f4-4f37-98e9-b2cb3f720f73] Running
	I0731 22:35:19.328383 1585635 system_pods.go:61] "kube-scheduler-addons-849486" [7407d088-a557-4b87-a488-37be78f806fd] Running
	I0731 22:35:19.328386 1585635 system_pods.go:61] "metrics-server-c59844bb4-vlxmw" [3c4a50ec-9a60-43e3-9e0c-a91793afab2d] Running
	I0731 22:35:19.328390 1585635 system_pods.go:61] "nvidia-device-plugin-daemonset-tjbj7" [04df05f4-a9ce-4b7d-a544-2ed8988a7f7d] Running
	I0731 22:35:19.328400 1585635 system_pods.go:61] "registry-698f998955-xsv4s" [505680ce-0882-4b35-957c-5038c3ef415e] Running
	I0731 22:35:19.328404 1585635 system_pods.go:61] "registry-proxy-7fzhl" [83d11338-5592-463e-b649-7ab9c5714f7d] Running
	I0731 22:35:19.328407 1585635 system_pods.go:61] "snapshot-controller-745499f584-8d8zk" [9b01ff52-2269-4019-8706-7629a243c597] Running
	I0731 22:35:19.328411 1585635 system_pods.go:61] "snapshot-controller-745499f584-bcqgs" [f7a07cca-0c8c-45e0-9347-78e13677f814] Running
	I0731 22:35:19.328420 1585635 system_pods.go:61] "storage-provisioner" [fef6ed9b-0181-4dcb-b189-cdc918ea4104] Running
	I0731 22:35:19.328442 1585635 system_pods.go:74] duration metric: took 3.59682632s to wait for pod list to return data ...
	I0731 22:35:19.328458 1585635 default_sa.go:34] waiting for default service account to be created ...
	I0731 22:35:19.330830 1585635 default_sa.go:45] found service account: "default"
	I0731 22:35:19.330855 1585635 default_sa.go:55] duration metric: took 2.389752ms for default service account to be created ...
	I0731 22:35:19.330864 1585635 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 22:35:19.340704 1585635 system_pods.go:86] 18 kube-system pods found
	I0731 22:35:19.340741 1585635 system_pods.go:89] "coredns-7db6d8ff4d-qv2pm" [d5cb6a71-36b0-4416-9aae-f244288db1a0] Running
	I0731 22:35:19.340749 1585635 system_pods.go:89] "csi-hostpath-attacher-0" [9777d915-8d6e-4ea4-ad27-bf9a834bd6c8] Running
	I0731 22:35:19.340754 1585635 system_pods.go:89] "csi-hostpath-resizer-0" [0871abf1-f861-4fcb-9565-833ce33eb600] Running
	I0731 22:35:19.340759 1585635 system_pods.go:89] "csi-hostpathplugin-54fjr" [41d14787-618a-4f95-99a0-30ed9a484afe] Running
	I0731 22:35:19.340763 1585635 system_pods.go:89] "etcd-addons-849486" [2f51d3a4-3ccd-4c0c-b36e-039e5b90b582] Running
	I0731 22:35:19.340767 1585635 system_pods.go:89] "kindnet-v5dmr" [36e1674f-d093-4894-a330-acf34f2d862a] Running
	I0731 22:35:19.340771 1585635 system_pods.go:89] "kube-apiserver-addons-849486" [9861d6dd-34df-48f6-989e-a5ec25987ba8] Running
	I0731 22:35:19.340776 1585635 system_pods.go:89] "kube-controller-manager-addons-849486" [86b5d477-353e-4384-850e-2fd9de19a5d5] Running
	I0731 22:35:19.340780 1585635 system_pods.go:89] "kube-ingress-dns-minikube" [93c14fc1-ba7a-4b8f-b1bb-b1a447536081] Running
	I0731 22:35:19.340791 1585635 system_pods.go:89] "kube-proxy-mxw62" [2c575b64-75f4-4f37-98e9-b2cb3f720f73] Running
	I0731 22:35:19.340795 1585635 system_pods.go:89] "kube-scheduler-addons-849486" [7407d088-a557-4b87-a488-37be78f806fd] Running
	I0731 22:35:19.340806 1585635 system_pods.go:89] "metrics-server-c59844bb4-vlxmw" [3c4a50ec-9a60-43e3-9e0c-a91793afab2d] Running
	I0731 22:35:19.340810 1585635 system_pods.go:89] "nvidia-device-plugin-daemonset-tjbj7" [04df05f4-a9ce-4b7d-a544-2ed8988a7f7d] Running
	I0731 22:35:19.340816 1585635 system_pods.go:89] "registry-698f998955-xsv4s" [505680ce-0882-4b35-957c-5038c3ef415e] Running
	I0731 22:35:19.340823 1585635 system_pods.go:89] "registry-proxy-7fzhl" [83d11338-5592-463e-b649-7ab9c5714f7d] Running
	I0731 22:35:19.340827 1585635 system_pods.go:89] "snapshot-controller-745499f584-8d8zk" [9b01ff52-2269-4019-8706-7629a243c597] Running
	I0731 22:35:19.340831 1585635 system_pods.go:89] "snapshot-controller-745499f584-bcqgs" [f7a07cca-0c8c-45e0-9347-78e13677f814] Running
	I0731 22:35:19.340837 1585635 system_pods.go:89] "storage-provisioner" [fef6ed9b-0181-4dcb-b189-cdc918ea4104] Running
	I0731 22:35:19.340844 1585635 system_pods.go:126] duration metric: took 9.975023ms to wait for k8s-apps to be running ...
	I0731 22:35:19.340856 1585635 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 22:35:19.340929 1585635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:35:19.355574 1585635 system_svc.go:56] duration metric: took 14.70843ms WaitForService to wait for kubelet
	I0731 22:35:19.355601 1585635 kubeadm.go:582] duration metric: took 2m45.55746659s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 22:35:19.355623 1585635 node_conditions.go:102] verifying NodePressure condition ...
	I0731 22:35:19.358905 1585635 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0731 22:35:19.358940 1585635 node_conditions.go:123] node cpu capacity is 2
	I0731 22:35:19.358952 1585635 node_conditions.go:105] duration metric: took 3.32373ms to run NodePressure ...
	I0731 22:35:19.358985 1585635 start.go:241] waiting for startup goroutines ...
	I0731 22:35:19.359000 1585635 start.go:246] waiting for cluster config update ...
	I0731 22:35:19.359016 1585635 start.go:255] writing updated cluster config ...
	I0731 22:35:19.359331 1585635 ssh_runner.go:195] Run: rm -f paused
	I0731 22:35:19.710314 1585635 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 22:35:19.712499 1585635 out.go:177] * Done! kubectl is now configured to use "addons-849486" cluster and "default" namespace by default
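
For context on the healthz wait logged at 22:35:15 above: minikube polls the apiserver's /healthz endpoint until it returns 200 with body "ok". Below is a minimal Go sketch of such a poll loop — not minikube's actual implementation; the InsecureSkipVerify transport is an assumption made here to keep the example self-contained, whereas minikube verifies against the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption for a self-contained example: skip TLS verification.
		// minikube itself trusts the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// The log above shows exactly this outcome: a 200 whose body is "ok".
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz at %s not ok within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}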
	
	
	==> CRI-O <==
	Jul 31 22:39:39 addons-849486 crio[970]: time="2024-07-31 22:39:39.884703167Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=eb382f15-97f6-433a-bd16-0bf7a80eb841 name=/runtime.v1.ImageService/ImageStatus
	Jul 31 22:39:39 addons-849486 crio[970]: time="2024-07-31 22:39:39.885811775Z" level=info msg="Creating container: default/hello-world-app-6778b5fc9f-fgj4r/hello-world-app" id=f31590d1-ec9b-41e8-8ffe-0c3f541c6df1 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 31 22:39:39 addons-849486 crio[970]: time="2024-07-31 22:39:39.885907118Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 31 22:39:39 addons-849486 crio[970]: time="2024-07-31 22:39:39.904409996Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/49a92ce60a78a5dbf5710966f32b681269a138d290e187849b3de087c7e75e57/merged/etc/passwd: no such file or directory"
	Jul 31 22:39:39 addons-849486 crio[970]: time="2024-07-31 22:39:39.904454878Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/49a92ce60a78a5dbf5710966f32b681269a138d290e187849b3de087c7e75e57/merged/etc/group: no such file or directory"
	Jul 31 22:39:39 addons-849486 crio[970]: time="2024-07-31 22:39:39.946445635Z" level=info msg="Created container d73abe518e12c522eecfe00e4dced1eb25fe58e12050905be6756b466424b186: default/hello-world-app-6778b5fc9f-fgj4r/hello-world-app" id=f31590d1-ec9b-41e8-8ffe-0c3f541c6df1 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 31 22:39:39 addons-849486 crio[970]: time="2024-07-31 22:39:39.948715362Z" level=info msg="Starting container: d73abe518e12c522eecfe00e4dced1eb25fe58e12050905be6756b466424b186" id=d17850a9-91a2-4164-a9e2-1ba7b7583954 name=/runtime.v1.RuntimeService/StartContainer
	Jul 31 22:39:39 addons-849486 crio[970]: time="2024-07-31 22:39:39.955818755Z" level=info msg="Started container" PID=8803 containerID=d73abe518e12c522eecfe00e4dced1eb25fe58e12050905be6756b466424b186 description=default/hello-world-app-6778b5fc9f-fgj4r/hello-world-app id=d17850a9-91a2-4164-a9e2-1ba7b7583954 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7703dfdb3d8b8ba7c532f6a3e8eec744de2dcac785ce73b82aff5713022de90a
	Jul 31 22:39:40 addons-849486 crio[970]: time="2024-07-31 22:39:40.562597278Z" level=info msg="Removing container: 469b2d62f5576814f0da7b87ccb276228b908434e351e0f66b4a872a43b7b4fa" id=60c9f253-b6aa-47a1-bbd3-3815d78bbcfe name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 31 22:39:40 addons-849486 crio[970]: time="2024-07-31 22:39:40.582645841Z" level=info msg="Removed container 469b2d62f5576814f0da7b87ccb276228b908434e351e0f66b4a872a43b7b4fa: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=60c9f253-b6aa-47a1-bbd3-3815d78bbcfe name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 31 22:39:42 addons-849486 crio[970]: time="2024-07-31 22:39:42.261875291Z" level=info msg="Stopping container: 9e133944c30b944cfb2ccbfe4aedc13b6b673800099c562d1013a1d0f2026d4b (timeout: 2s)" id=38f8ef94-5ee5-4067-8e6c-3e18c3e742b7 name=/runtime.v1.RuntimeService/StopContainer
	Jul 31 22:39:44 addons-849486 crio[970]: time="2024-07-31 22:39:44.267783400Z" level=warning msg="Stopping container 9e133944c30b944cfb2ccbfe4aedc13b6b673800099c562d1013a1d0f2026d4b with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=38f8ef94-5ee5-4067-8e6c-3e18c3e742b7 name=/runtime.v1.RuntimeService/StopContainer
	Jul 31 22:39:44 addons-849486 conmon[4845]: conmon 9e133944c30b944cfb2c <ninfo>: container 4857 exited with status 137
	Jul 31 22:39:44 addons-849486 crio[970]: time="2024-07-31 22:39:44.406088239Z" level=info msg="Stopped container 9e133944c30b944cfb2ccbfe4aedc13b6b673800099c562d1013a1d0f2026d4b: ingress-nginx/ingress-nginx-controller-6d9bd977d4-bqckx/controller" id=38f8ef94-5ee5-4067-8e6c-3e18c3e742b7 name=/runtime.v1.RuntimeService/StopContainer
	Jul 31 22:39:44 addons-849486 crio[970]: time="2024-07-31 22:39:44.406673681Z" level=info msg="Stopping pod sandbox: 4b1cb46cac816513ae64c5f04a32c2b6aa688e8cf81455aa4554699fa843c4a9" id=4ec81453-b6c2-4cdc-ae57-295bce7bf30f name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 31 22:39:44 addons-849486 crio[970]: time="2024-07-31 22:39:44.409826367Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-IDSKMLCMI4OGSZPH - [0:0]\n:KUBE-HP-2K7JQARPD3YHVJCX - [0:0]\n-X KUBE-HP-IDSKMLCMI4OGSZPH\n-X KUBE-HP-2K7JQARPD3YHVJCX\nCOMMIT\n"
	Jul 31 22:39:44 addons-849486 crio[970]: time="2024-07-31 22:39:44.411632417Z" level=info msg="Closing host port tcp:80"
	Jul 31 22:39:44 addons-849486 crio[970]: time="2024-07-31 22:39:44.411679399Z" level=info msg="Closing host port tcp:443"
	Jul 31 22:39:44 addons-849486 crio[970]: time="2024-07-31 22:39:44.413238425Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jul 31 22:39:44 addons-849486 crio[970]: time="2024-07-31 22:39:44.413276234Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jul 31 22:39:44 addons-849486 crio[970]: time="2024-07-31 22:39:44.413446359Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-6d9bd977d4-bqckx Namespace:ingress-nginx ID:4b1cb46cac816513ae64c5f04a32c2b6aa688e8cf81455aa4554699fa843c4a9 UID:97b824ba-9aaa-4a04-839f-fc70bdcb2776 NetNS:/var/run/netns/62392492-35eb-4a27-aef6-19711eddc8b7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 31 22:39:44 addons-849486 crio[970]: time="2024-07-31 22:39:44.413582703Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-6d9bd977d4-bqckx from CNI network \"kindnet\" (type=ptp)"
	Jul 31 22:39:44 addons-849486 crio[970]: time="2024-07-31 22:39:44.434042407Z" level=info msg="Stopped pod sandbox: 4b1cb46cac816513ae64c5f04a32c2b6aa688e8cf81455aa4554699fa843c4a9" id=4ec81453-b6c2-4cdc-ae57-295bce7bf30f name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 31 22:39:44 addons-849486 crio[970]: time="2024-07-31 22:39:44.573594520Z" level=info msg="Removing container: 9e133944c30b944cfb2ccbfe4aedc13b6b673800099c562d1013a1d0f2026d4b" id=6557eea1-a01c-4b16-9277-e2df262b456f name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 31 22:39:44 addons-849486 crio[970]: time="2024-07-31 22:39:44.587512535Z" level=info msg="Removed container 9e133944c30b944cfb2ccbfe4aedc13b6b673800099c562d1013a1d0f2026d4b: ingress-nginx/ingress-nginx-controller-6d9bd977d4-bqckx/controller" id=6557eea1-a01c-4b16-9277-e2df262b456f name=/runtime.v1.RuntimeService/RemoveContainer
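
The stop sequence above (the 2s timeout at 22:39:42, the timed-out warning at 22:39:44, and conmon's "exited with status 137") is the standard graceful-stop escalation: stop signal, grace period, then SIGKILL, where 137 = 128 + 9 (SIGKILL's signal number). A hedged, self-contained Go sketch of the same escalation against a child that ignores SIGTERM — illustrative only, not CRI-O's code:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	// Stand-in for a container whose main process ignores the stop signal.
	cmd := exec.Command("sh", "-c", `trap "" TERM; sleep 60`)
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	cmd.Process.Signal(syscall.SIGTERM) // polite stop request
	select {
	case <-done: // exited within the grace period
	case <-time.After(2 * time.Second): // the 2s timeout seen in the log
		cmd.Process.Kill() // escalate to SIGKILL
		<-done
	}
	if ws, ok := cmd.ProcessState.Sys().(syscall.WaitStatus); ok && ws.Signaled() {
		// 128 + 9 (SIGKILL) = 137, matching "exited with status 137" above.
		fmt.Println("exit status:", 128+int(ws.Signal()))
	}
}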
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d73abe518e12c       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        9 seconds ago       Running             hello-world-app           0                   7703dfdb3d8b8       hello-world-app-6778b5fc9f-fgj4r
	86fd2c8828f5f       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   d205817c76515       nginx
	aa4fa202f80fe       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          4 minutes ago       Running             busybox                   0                   d3199cba3f129       busybox
	fb5857a681582       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70        5 minutes ago       Running             metrics-server            0                   aebc18b063c49       metrics-server-c59844bb4-vlxmw
	26627e03dfcb8       296b5f799fcd8a39f0e93373bc18787d846c6a2a78a5657b1514831f043c09bf                                                             6 minutes ago       Exited              patch                     2                   ddbc3d0d76e2d       ingress-nginx-admission-patch-59tg2
	cea6d2938f44b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   6 minutes ago       Exited              create                    0                   e4e6f8867afe4       ingress-nginx-admission-create-52v9j
	51ebc2ba4de88       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             6 minutes ago       Running             storage-provisioner       0                   f79ff61a5deeb       storage-provisioner
	033ddb4c73fa6       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                             6 minutes ago       Running             coredns                   0                   a2d158546536e       coredns-7db6d8ff4d-qv2pm
	2fafcbc5f6d0b       docker.io/kindest/kindnetd@sha256:14100a3a7aca6cad3de3f26ee342ad937ca7d2844b1983d3baa7bf5f491baa7a                           7 minutes ago       Running             kindnet-cni               0                   bca3c90519a6e       kindnet-v5dmr
	6cc491c729a8c       2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be                                                             7 minutes ago       Running             kube-proxy                0                   242914e32703a       kube-proxy-mxw62
	43e08f3fcd840       d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355                                                             7 minutes ago       Running             kube-scheduler            0                   ea8f758c53cee       kube-scheduler-addons-849486
	5fd4a5605ac9a       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                             7 minutes ago       Running             etcd                      0                   cdf48c9282148       etcd-addons-849486
	f4494142a4f5f       8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a                                                             7 minutes ago       Running             kube-controller-manager   0                   c26a88794ecfb       kube-controller-manager-addons-849486
	8c713658baa17       61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca                                                             7 minutes ago       Running             kube-apiserver            0                   4bd849428ece1       kube-apiserver-addons-849486
	
	
	==> coredns [033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b] <==
	[INFO] 10.244.0.17:53319 - 749 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002483076s
	[INFO] 10.244.0.17:44981 - 50249 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00064643s
	[INFO] 10.244.0.17:44981 - 85 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000843271s
	[INFO] 10.244.0.17:39094 - 5482 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000149776s
	[INFO] 10.244.0.17:39094 - 27246 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000191663s
	[INFO] 10.244.0.17:60801 - 58112 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000051536s
	[INFO] 10.244.0.17:60801 - 14 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000071048s
	[INFO] 10.244.0.17:53413 - 20028 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000084036s
	[INFO] 10.244.0.17:53413 - 38462 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000120189s
	[INFO] 10.244.0.17:51877 - 45460 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001520635s
	[INFO] 10.244.0.17:51877 - 28313 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001797146s
	[INFO] 10.244.0.17:50534 - 20698 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000076545s
	[INFO] 10.244.0.17:50534 - 4568 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000062105s
	[INFO] 10.244.0.20:50649 - 4546 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000188299s
	[INFO] 10.244.0.20:50431 - 2796 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000070802s
	[INFO] 10.244.0.20:43809 - 37926 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127269s
	[INFO] 10.244.0.20:58929 - 21717 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000081059s
	[INFO] 10.244.0.20:58608 - 48966 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00015652s
	[INFO] 10.244.0.20:45714 - 38716 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000219462s
	[INFO] 10.244.0.20:57979 - 21099 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003101594s
	[INFO] 10.244.0.20:44653 - 53362 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003305515s
	[INFO] 10.244.0.20:44185 - 36571 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000921703s
	[INFO] 10.244.0.20:51034 - 62993 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.00088661s
	[INFO] 10.244.0.22:35898 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000197521s
	[INFO] 10.244.0.22:38799 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110457s
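
The repeated NXDOMAIN answers per name above come from resolv.conf search-list expansion: with the in-cluster default ndots:5, a query such as registry.kube-system.svc.cluster.local (four dots) is first tried with every search suffix before the name as given, which matches the query sequence CoreDNS logs. A small illustrative Go sketch of that expansion; the search list below is an assumption based on a typical kube-system pod's resolv.conf:

package main

import "fmt"

// expand mimics the resolver search-list behavior that produces the
// NXDOMAIN sequence above: names with fewer than ndots dots are tried
// with each search suffix before being tried as given.
func expand(name string, ndots int, search []string) []string {
	dots := 0
	for _, r := range name {
		if r == '.' {
			dots++
		}
	}
	var tries []string
	if dots < ndots {
		for _, s := range search {
			tries = append(tries, name+"."+s)
		}
	}
	return append(tries, name)
}

func main() {
	// Assumed search list for a pod in the kube-system namespace.
	search := []string{
		"kube-system.svc.cluster.local",
		"svc.cluster.local",
		"cluster.local",
		"us-east-2.compute.internal",
	}
	for _, q := range expand("registry.kube-system.svc.cluster.local", 5, search) {
		fmt.Println(q)
	}
}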
	
	
	==> describe nodes <==
	Name:               addons-849486
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-849486
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=addons-849486
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T22_32_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-849486
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:32:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-849486
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:39:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 22:37:56 +0000   Wed, 31 Jul 2024 22:32:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 22:37:56 +0000   Wed, 31 Jul 2024 22:32:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 22:37:56 +0000   Wed, 31 Jul 2024 22:32:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 22:37:56 +0000   Wed, 31 Jul 2024 22:33:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-849486
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3a2ae86de824034b2b2f153488a584b
	  System UUID:                ebbfb0f4-45dd-4e41-b862-ce1e4dc4dac6
	  Boot ID:                    2daee006-f42a-4cec-a0b1-7137cc9806d6
	  Kernel Version:             5.15.0-1066-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  default                     hello-world-app-6778b5fc9f-fgj4r         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 coredns-7db6d8ff4d-qv2pm                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m16s
	  kube-system                 etcd-addons-849486                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7m29s
	  kube-system                 kindnet-v5dmr                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m16s
	  kube-system                 kube-apiserver-addons-849486             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 kube-controller-manager-addons-849486    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 kube-proxy-mxw62                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 kube-scheduler-addons-849486             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 metrics-server-c59844bb4-vlxmw           100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         7m10s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m9s   kube-proxy       
	  Normal  Starting                 7m29s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m29s  kubelet          Node addons-849486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m29s  kubelet          Node addons-849486 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m29s  kubelet          Node addons-849486 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m17s  node-controller  Node addons-849486 event: Registered Node addons-849486 in Controller
	  Normal  NodeReady                6m30s  kubelet          Node addons-849486 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001117] FS-Cache: O-key=[8] 'f8405c0100000000'
	[  +0.000704] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000937] FS-Cache: N-cookie d=00000000f00cf1f7{9p.inode} n=000000008e8348f1
	[  +0.001059] FS-Cache: N-key=[8] 'f8405c0100000000'
	[  +0.002954] FS-Cache: Duplicate cookie detected
	[  +0.000668] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.000993] FS-Cache: O-cookie d=00000000f00cf1f7{9p.inode} n=00000000260dfb27
	[  +0.001074] FS-Cache: O-key=[8] 'f8405c0100000000'
	[  +0.000691] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000914] FS-Cache: N-cookie d=00000000f00cf1f7{9p.inode} n=0000000074d3f313
	[  +0.001052] FS-Cache: N-key=[8] 'f8405c0100000000'
	[  +2.888027] FS-Cache: Duplicate cookie detected
	[  +0.004601] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.000988] FS-Cache: O-cookie d=00000000f00cf1f7{9p.inode} n=00000000f2a098bb
	[  +0.001052] FS-Cache: O-key=[8] 'f7405c0100000000'
	[  +0.000703] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000964] FS-Cache: N-cookie d=00000000f00cf1f7{9p.inode} n=000000008e8348f1
	[  +0.001024] FS-Cache: N-key=[8] 'f7405c0100000000'
	[  +0.283755] FS-Cache: Duplicate cookie detected
	[  +0.000701] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.000957] FS-Cache: O-cookie d=00000000f00cf1f7{9p.inode} n=00000000541c833f
	[  +0.001032] FS-Cache: O-key=[8] 'fd405c0100000000'
	[  +0.000717] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000931] FS-Cache: N-cookie d=00000000f00cf1f7{9p.inode} n=000000006dff21d2
	[  +0.001042] FS-Cache: N-key=[8] 'fd405c0100000000'
	
	
	==> etcd [5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699] <==
	{"level":"info","ts":"2024-07-31T22:32:14.037258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-31T22:32:14.037301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-07-31T22:32:14.037354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-07-31T22:32:14.037388Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-31T22:32:14.037426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-07-31T22:32:14.03746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-31T22:32:14.041282Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-849486 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T22:32:14.041492Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T22:32:14.041833Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T22:32:14.045123Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T22:32:14.046722Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T22:32:14.046808Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T22:32:14.046854Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T22:32:14.046818Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T22:32:14.046993Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T22:32:14.047053Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T22:32:14.04858Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-07-31T22:32:35.279674Z","caller":"traceutil/trace.go:171","msg":"trace[1069801401] transaction","detail":"{read_only:false; response_revision:363; number_of_response:1; }","duration":"159.772845ms","start":"2024-07-31T22:32:35.119882Z","end":"2024-07-31T22:32:35.279655Z","steps":["trace[1069801401] 'process raft request'  (duration: 157.074885ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T22:32:37.452807Z","caller":"traceutil/trace.go:171","msg":"trace[102708275] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"163.332948ms","start":"2024-07-31T22:32:37.289459Z","end":"2024-07-31T22:32:37.452792Z","steps":["trace[102708275] 'process raft request'  (duration: 162.95178ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T22:32:37.452989Z","caller":"traceutil/trace.go:171","msg":"trace[1929343642] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"163.471483ms","start":"2024-07-31T22:32:37.28951Z","end":"2024-07-31T22:32:37.452981Z","steps":["trace[1929343642] 'process raft request'  (duration: 162.991443ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T22:32:37.453094Z","caller":"traceutil/trace.go:171","msg":"trace[186713461] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"163.539897ms","start":"2024-07-31T22:32:37.289543Z","end":"2024-07-31T22:32:37.453082Z","steps":["trace[186713461] 'process raft request'  (duration: 162.992133ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T22:32:37.883779Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.92596ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/default/\" range_end:\"/registry/limitranges/default0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T22:32:37.883862Z","caller":"traceutil/trace.go:171","msg":"trace[226505398] range","detail":"{range_begin:/registry/limitranges/default/; range_end:/registry/limitranges/default0; response_count:0; response_revision:389; }","duration":"105.018464ms","start":"2024-07-31T22:32:37.778828Z","end":"2024-07-31T22:32:37.883847Z","steps":["trace[226505398] 'agreement among raft nodes before linearized reading'  (duration: 96.672828ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T22:32:37.883977Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.286468ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T22:32:37.883995Z","caller":"traceutil/trace.go:171","msg":"trace[627011078] range","detail":"{range_begin:/registry/clusterrolebindings/storage-provisioner; range_end:; response_count:0; response_revision:389; }","duration":"105.306923ms","start":"2024-07-31T22:32:37.778683Z","end":"2024-07-31T22:32:37.88399Z","steps":["trace[627011078] 'agreement among raft nodes before linearized reading'  (duration: 96.832894ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:39:49 up  6:22,  0 users,  load average: 0.26, 1.51, 2.31
	Linux addons-849486 5.15.0-1066-aws #72~20.04.1-Ubuntu SMP Sat Jul 20 07:44:07 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757] <==
	I0731 22:37:39.838731       1 main.go:299] handling current node
	I0731 22:37:49.845228       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0731 22:37:49.845336       1 main.go:299] handling current node
	I0731 22:37:59.838094       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0731 22:37:59.838131       1 main.go:299] handling current node
	I0731 22:38:09.846252       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0731 22:38:09.846287       1 main.go:299] handling current node
	I0731 22:38:19.844342       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0731 22:38:19.844380       1 main.go:299] handling current node
	I0731 22:38:29.844329       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0731 22:38:29.844363       1 main.go:299] handling current node
	I0731 22:38:39.838132       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0731 22:38:39.838168       1 main.go:299] handling current node
	I0731 22:38:49.838108       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0731 22:38:49.838233       1 main.go:299] handling current node
	I0731 22:38:59.845411       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0731 22:38:59.845444       1 main.go:299] handling current node
	I0731 22:39:09.838182       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0731 22:39:09.838215       1 main.go:299] handling current node
	I0731 22:39:19.838268       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0731 22:39:19.838306       1 main.go:299] handling current node
	I0731 22:39:29.845510       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0731 22:39:29.845544       1 main.go:299] handling current node
	I0731 22:39:39.838956       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0731 22:39:39.838988       1 main.go:299] handling current node
	
	
	==> kube-apiserver [8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc] <==
	E0731 22:35:08.135060       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.27.160:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.27.160:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.27.160:443: connect: connection refused
	I0731 22:35:08.201422       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0731 22:35:29.525066       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46776: use of closed network connection
	E0731 22:35:29.761361       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46804: use of closed network connection
	I0731 22:36:03.013643       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0731 22:36:38.868885       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 22:36:38.868933       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 22:36:38.894153       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 22:36:38.894208       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 22:36:38.934940       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 22:36:38.935059       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 22:36:38.988118       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 22:36:38.988304       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0731 22:36:39.000527       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0731 22:36:39.010734       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 22:36:39.010836       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0731 22:36:39.935554       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0731 22:36:40.013778       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0731 22:36:40.038117       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0731 22:36:46.619691       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.213.5"}
	I0731 22:37:12.861173       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0731 22:37:13.910284       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0731 22:37:18.425866       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0731 22:37:18.720084       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.224.86"}
	I0731 22:39:38.570384       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.53.179"}
	
	
	==> kube-controller-manager [f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf] <==
	E0731 22:38:31.037954       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 22:38:31.325823       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 22:38:31.325859       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 22:38:42.776818       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 22:38:42.777307       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 22:38:49.621766       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 22:38:49.621801       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 22:39:02.632229       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 22:39:02.632267       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 22:39:16.688844       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 22:39:16.688886       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 22:39:28.653133       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 22:39:28.653170       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0731 22:39:38.384779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="50.451404ms"
	I0731 22:39:38.412658       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="27.830132ms"
	I0731 22:39:38.412932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="44.701µs"
	I0731 22:39:40.632957       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="23.886367ms"
	I0731 22:39:40.633164       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="86.712µs"
	I0731 22:39:41.227054       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0731 22:39:41.233419       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="9.28µs"
	I0731 22:39:41.237172       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	W0731 22:39:44.257998       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 22:39:44.258042       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 22:39:46.490058       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 22:39:46.490096       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc] <==
	I0731 22:32:38.756380       1 server_linux.go:69] "Using iptables proxy"
	I0731 22:32:39.222800       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0731 22:32:39.901160       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0731 22:32:39.901205       1 server_linux.go:165] "Using iptables Proxier"
	I0731 22:32:39.929284       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0731 22:32:39.948335       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0731 22:32:39.948710       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 22:32:39.948998       1 server.go:872] "Version info" version="v1.30.3"
	I0731 22:32:39.949261       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 22:32:39.950190       1 config.go:192] "Starting service config controller"
	I0731 22:32:39.950265       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 22:32:39.950335       1 config.go:101] "Starting endpoint slice config controller"
	I0731 22:32:39.950372       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 22:32:39.951159       1 config.go:319] "Starting node config controller"
	I0731 22:32:39.951222       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 22:32:40.057789       1 shared_informer.go:320] Caches are synced for node config
	I0731 22:32:40.057831       1 shared_informer.go:320] Caches are synced for service config
	I0731 22:32:40.057861       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d] <==
	W0731 22:32:17.620169       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 22:32:17.620187       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 22:32:17.620253       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 22:32:17.620269       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 22:32:17.620891       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 22:32:17.620916       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 22:32:18.498853       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 22:32:18.499001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 22:32:18.536885       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 22:32:18.537027       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 22:32:18.595861       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 22:32:18.595982       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 22:32:18.606358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 22:32:18.606417       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 22:32:18.614429       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 22:32:18.614593       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 22:32:18.619012       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 22:32:18.619142       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 22:32:18.628895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 22:32:18.628993       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 22:32:18.716770       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 22:32:18.716806       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 22:32:18.722342       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 22:32:18.722471       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0731 22:32:19.112195       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 22:39:38 addons-849486 kubelet[1542]: E0731 22:39:38.370060    1542 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ecb73901-bf5b-411e-aef7-774775de16e5" containerName="gadget"
	Jul 31 22:39:38 addons-849486 kubelet[1542]: E0731 22:39:38.370074    1542 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ecb73901-bf5b-411e-aef7-774775de16e5" containerName="gadget"
	Jul 31 22:39:38 addons-849486 kubelet[1542]: I0731 22:39:38.370107    1542 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb73901-bf5b-411e-aef7-774775de16e5" containerName="gadget"
	Jul 31 22:39:38 addons-849486 kubelet[1542]: I0731 22:39:38.370115    1542 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb73901-bf5b-411e-aef7-774775de16e5" containerName="gadget"
	Jul 31 22:39:38 addons-849486 kubelet[1542]: I0731 22:39:38.370121    1542 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb73901-bf5b-411e-aef7-774775de16e5" containerName="gadget"
	Jul 31 22:39:38 addons-849486 kubelet[1542]: I0731 22:39:38.469342    1542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v94h7\" (UniqueName: \"kubernetes.io/projected/159b0821-e199-48c1-ad97-6582a6f7bffd-kube-api-access-v94h7\") pod \"hello-world-app-6778b5fc9f-fgj4r\" (UID: \"159b0821-e199-48c1-ad97-6582a6f7bffd\") " pod="default/hello-world-app-6778b5fc9f-fgj4r"
	Jul 31 22:39:39 addons-849486 kubelet[1542]: I0731 22:39:39.678308    1542 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlrz4\" (UniqueName: \"kubernetes.io/projected/93c14fc1-ba7a-4b8f-b1bb-b1a447536081-kube-api-access-rlrz4\") pod \"93c14fc1-ba7a-4b8f-b1bb-b1a447536081\" (UID: \"93c14fc1-ba7a-4b8f-b1bb-b1a447536081\") "
	Jul 31 22:39:39 addons-849486 kubelet[1542]: I0731 22:39:39.683992    1542 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c14fc1-ba7a-4b8f-b1bb-b1a447536081-kube-api-access-rlrz4" (OuterVolumeSpecName: "kube-api-access-rlrz4") pod "93c14fc1-ba7a-4b8f-b1bb-b1a447536081" (UID: "93c14fc1-ba7a-4b8f-b1bb-b1a447536081"). InnerVolumeSpecName "kube-api-access-rlrz4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 22:39:39 addons-849486 kubelet[1542]: I0731 22:39:39.779358    1542 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rlrz4\" (UniqueName: \"kubernetes.io/projected/93c14fc1-ba7a-4b8f-b1bb-b1a447536081-kube-api-access-rlrz4\") on node \"addons-849486\" DevicePath \"\""
	Jul 31 22:39:40 addons-849486 kubelet[1542]: I0731 22:39:40.561185    1542 scope.go:117] "RemoveContainer" containerID="469b2d62f5576814f0da7b87ccb276228b908434e351e0f66b4a872a43b7b4fa"
	Jul 31 22:39:41 addons-849486 kubelet[1542]: I0731 22:39:41.250660    1542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-fgj4r" podStartSLOduration=2.115033241 podStartE2EDuration="3.250639177s" podCreationTimestamp="2024-07-31 22:39:38 +0000 UTC" firstStartedPulling="2024-07-31 22:39:38.746037894 +0000 UTC m=+438.810854686" lastFinishedPulling="2024-07-31 22:39:39.881643814 +0000 UTC m=+439.946460622" observedRunningTime="2024-07-31 22:39:40.608052638 +0000 UTC m=+440.672869438" watchObservedRunningTime="2024-07-31 22:39:41.250639177 +0000 UTC m=+441.315455969"
	Jul 31 22:39:42 addons-849486 kubelet[1542]: I0731 22:39:42.094105    1542 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21440a69-54fa-4c39-884b-b84c2c67f6ca" path="/var/lib/kubelet/pods/21440a69-54fa-4c39-884b-b84c2c67f6ca/volumes"
	Jul 31 22:39:42 addons-849486 kubelet[1542]: I0731 22:39:42.094556    1542 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5698189d-ec64-4473-a920-13fee360d4fa" path="/var/lib/kubelet/pods/5698189d-ec64-4473-a920-13fee360d4fa/volumes"
	Jul 31 22:39:42 addons-849486 kubelet[1542]: I0731 22:39:42.094967    1542 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93c14fc1-ba7a-4b8f-b1bb-b1a447536081" path="/var/lib/kubelet/pods/93c14fc1-ba7a-4b8f-b1bb-b1a447536081/volumes"
	Jul 31 22:39:44 addons-849486 kubelet[1542]: I0731 22:39:44.513137    1542 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hst2r\" (UniqueName: \"kubernetes.io/projected/97b824ba-9aaa-4a04-839f-fc70bdcb2776-kube-api-access-hst2r\") pod \"97b824ba-9aaa-4a04-839f-fc70bdcb2776\" (UID: \"97b824ba-9aaa-4a04-839f-fc70bdcb2776\") "
	Jul 31 22:39:44 addons-849486 kubelet[1542]: I0731 22:39:44.513201    1542 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97b824ba-9aaa-4a04-839f-fc70bdcb2776-webhook-cert\") pod \"97b824ba-9aaa-4a04-839f-fc70bdcb2776\" (UID: \"97b824ba-9aaa-4a04-839f-fc70bdcb2776\") "
	Jul 31 22:39:44 addons-849486 kubelet[1542]: I0731 22:39:44.518133    1542 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b824ba-9aaa-4a04-839f-fc70bdcb2776-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "97b824ba-9aaa-4a04-839f-fc70bdcb2776" (UID: "97b824ba-9aaa-4a04-839f-fc70bdcb2776"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 31 22:39:44 addons-849486 kubelet[1542]: I0731 22:39:44.518245    1542 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97b824ba-9aaa-4a04-839f-fc70bdcb2776-kube-api-access-hst2r" (OuterVolumeSpecName: "kube-api-access-hst2r") pod "97b824ba-9aaa-4a04-839f-fc70bdcb2776" (UID: "97b824ba-9aaa-4a04-839f-fc70bdcb2776"). InnerVolumeSpecName "kube-api-access-hst2r". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 22:39:44 addons-849486 kubelet[1542]: I0731 22:39:44.572050    1542 scope.go:117] "RemoveContainer" containerID="9e133944c30b944cfb2ccbfe4aedc13b6b673800099c562d1013a1d0f2026d4b"
	Jul 31 22:39:44 addons-849486 kubelet[1542]: I0731 22:39:44.587817    1542 scope.go:117] "RemoveContainer" containerID="9e133944c30b944cfb2ccbfe4aedc13b6b673800099c562d1013a1d0f2026d4b"
	Jul 31 22:39:44 addons-849486 kubelet[1542]: E0731 22:39:44.588195    1542 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e133944c30b944cfb2ccbfe4aedc13b6b673800099c562d1013a1d0f2026d4b\": container with ID starting with 9e133944c30b944cfb2ccbfe4aedc13b6b673800099c562d1013a1d0f2026d4b not found: ID does not exist" containerID="9e133944c30b944cfb2ccbfe4aedc13b6b673800099c562d1013a1d0f2026d4b"
	Jul 31 22:39:44 addons-849486 kubelet[1542]: I0731 22:39:44.588233    1542 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e133944c30b944cfb2ccbfe4aedc13b6b673800099c562d1013a1d0f2026d4b"} err="failed to get container status \"9e133944c30b944cfb2ccbfe4aedc13b6b673800099c562d1013a1d0f2026d4b\": rpc error: code = NotFound desc = could not find container \"9e133944c30b944cfb2ccbfe4aedc13b6b673800099c562d1013a1d0f2026d4b\": container with ID starting with 9e133944c30b944cfb2ccbfe4aedc13b6b673800099c562d1013a1d0f2026d4b not found: ID does not exist"
	Jul 31 22:39:44 addons-849486 kubelet[1542]: I0731 22:39:44.613567    1542 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hst2r\" (UniqueName: \"kubernetes.io/projected/97b824ba-9aaa-4a04-839f-fc70bdcb2776-kube-api-access-hst2r\") on node \"addons-849486\" DevicePath \"\""
	Jul 31 22:39:44 addons-849486 kubelet[1542]: I0731 22:39:44.613600    1542 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97b824ba-9aaa-4a04-839f-fc70bdcb2776-webhook-cert\") on node \"addons-849486\" DevicePath \"\""
	Jul 31 22:39:46 addons-849486 kubelet[1542]: I0731 22:39:46.093182    1542 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97b824ba-9aaa-4a04-839f-fc70bdcb2776" path="/var/lib/kubelet/pods/97b824ba-9aaa-4a04-839f-fc70bdcb2776/volumes"
	
	
	==> storage-provisioner [51ebc2ba4de88879295ed4972dd5fd4dbfc779bace166a2294f70f146b15149d] <==
	I0731 22:33:20.916280       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 22:33:20.931522       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 22:33:20.931771       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 22:33:20.948102       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 22:33:20.948476       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-849486_902c7b87-d25f-41db-97be-e918f64904d7!
	I0731 22:33:20.949474       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bac6edc7-406f-4e5a-bd10-08a1792a5d05", APIVersion:"v1", ResourceVersion:"910", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-849486_902c7b87-d25f-41db-97be-e918f64904d7 became leader
	I0731 22:33:21.049221       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-849486_902c7b87-d25f-41db-97be-e918f64904d7!
	

-- /stdout --
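
The etcd log in the dump above flags reads that exceed the server's 100ms expected-duration ("apply request took too long"). Below is a minimal sketch of timing a read against that same threshold, assuming the go.etcd.io/etcd/client/v3 package and network reach to the endpoint shown in the log; the real server serves TLS, so a working check would also need the cluster's client certificates:

	// etcdlatency.go - hypothetical probe; times one ranged read against the
	// 100ms threshold that produced the "apply request took too long" warnings.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		clientv3 "go.etcd.io/etcd/client/v3"
	)

	func main() {
		// Endpoint taken from the log; TLS config omitted for brevity.
		cli, err := clientv3.New(clientv3.Config{
			Endpoints:   []string{"192.168.49.2:2379"},
			DialTimeout: 5 * time.Second,
		})
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()

		start := time.Now()
		_, err = cli.Get(ctx, "/registry/limitranges/default/", clientv3.WithPrefix())
		if err != nil {
			log.Fatal(err)
		}
		if took := time.Since(start); took > 100*time.Millisecond {
			fmt.Printf("slow read: %v (expected < 100ms)\n", took)
		} else {
			fmt.Printf("read ok in %v\n", took)
		}
	}

Note the two slow reads in the log were only marginally over the threshold (~105ms) and were attributed to waiting on raft agreement before a linearized read, not to disk latency.
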
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-849486 -n addons-849486
helpers_test.go:261: (dbg) Run:  kubectl --context addons-849486 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.63s)
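
The failing step in this test was the in-node curl against the ingress controller with a spoofed Host header (curl exit status 28 is its timeout code, surfaced through ssh above). A rough standalone equivalent of that probe follows, as a sketch rather than the test's actual code; it assumes direct reachability of the node IP from the log instead of going through minikube ssh:

	// ingressprobe.go - hypothetical re-creation of the failed check: plain
	// HTTP to the ingress with the Host header of the nginx Ingress rule.
	package main

	import (
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 10 * time.Second}
		req, err := http.NewRequest(http.MethodGet, "http://192.168.49.2/", nil)
		if err != nil {
			log.Fatal(err)
		}
		// In Go, setting req.Host (not a header field) is what controls the
		// Host header, which routes the request to the nginx.example.com rule.
		req.Host = "nginx.example.com"

		resp, err := client.Do(req)
		if err != nil {
			log.Fatal(err) // the test's curl timed out at this stage
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status=%s bytes=%d\n", resp.Status, len(body))
	}
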

TestAddons/parallel/MetricsServer (353.7s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.301987ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-vlxmw" [3c4a50ec-9a60-43e3-9e0c-a91793afab2d] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004231656s
addons_test.go:417: (dbg) Run:  kubectl --context addons-849486 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-849486 top pods -n kube-system: exit status 1 (89.58403ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-qv2pm, age: 4m36.485135259s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-849486 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-849486 top pods -n kube-system: exit status 1 (89.054835ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-qv2pm, age: 4m40.210272978s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-849486 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-849486 top pods -n kube-system: exit status 1 (92.85506ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-qv2pm, age: 4m43.486596376s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-849486 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-849486 top pods -n kube-system: exit status 1 (93.429844ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-qv2pm, age: 4m51.299766502s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-849486 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-849486 top pods -n kube-system: exit status 1 (90.596327ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-qv2pm, age: 5m2.641730065s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-849486 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-849486 top pods -n kube-system: exit status 1 (88.22493ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-qv2pm, age: 5m19.893256597s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-849486 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-849486 top pods -n kube-system: exit status 1 (92.795924ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-qv2pm, age: 5m37.737194337s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-849486 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-849486 top pods -n kube-system: exit status 1 (87.855157ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-qv2pm, age: 6m25.28835277s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-849486 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-849486 top pods -n kube-system: exit status 1 (90.449158ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-qv2pm, age: 7m18.841294979s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-849486 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-849486 top pods -n kube-system: exit status 1 (109.039068ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-qv2pm, age: 8m6.053823081s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-849486 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-849486 top pods -n kube-system: exit status 1 (90.135354ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-qv2pm, age: 9m5.682347994s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-849486 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-849486 top pods -n kube-system: exit status 1 (91.877394ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-qv2pm, age: 10m21.075474546s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-849486 addons disable metrics-server --alsologtostderr -v=1
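
The dozen addons_test.go:417 entries above are one poll loop: the test re-runs kubectl top until metrics appear or its retry budget runs out, and here every attempt failed for over ten minutes of pod age. A minimal sketch of that polling pattern follows (a hypothetical helper, not the test's real implementation; the backoff interval is a guess):

	// toppoll.go - polls `kubectl top pods` until the metrics pipeline
	// (kubelet -> metrics-server -> apiserver) starts returning data.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(6 * time.Minute)
		for attempt := 1; ; attempt++ {
			out, err := exec.Command("kubectl", "--context", "addons-849486",
				"top", "pods", "-n", "kube-system").CombinedOutput()
			if err == nil {
				fmt.Printf("metrics after %d attempt(s):\n%s", attempt, out)
				return
			}
			if time.Now().After(deadline) {
				log.Fatalf("failed checking metric server: %v\n%s", err, out)
			}
			// kubectl exits non-zero with "Metrics not available" while the
			// pipeline is warming up (or, as in this run, never recovering).
			time.Sleep(10 * time.Second)
		}
	}
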
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-849486
helpers_test.go:235: (dbg) docker inspect addons-849486:

-- stdout --
	[
	    {
	        "Id": "110805b36784ba9132cb5c8aa53b735d1e872233936c0901abe28f3f75a710bf",
	        "Created": "2024-07-31T22:31:58.031500435Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1586136,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-31T22:31:58.175549775Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2cd84ab2172023a68162f38a55db46353562cea41552fd8e8bdec97b31b2c495",
	        "ResolvConfPath": "/var/lib/docker/containers/110805b36784ba9132cb5c8aa53b735d1e872233936c0901abe28f3f75a710bf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/110805b36784ba9132cb5c8aa53b735d1e872233936c0901abe28f3f75a710bf/hostname",
	        "HostsPath": "/var/lib/docker/containers/110805b36784ba9132cb5c8aa53b735d1e872233936c0901abe28f3f75a710bf/hosts",
	        "LogPath": "/var/lib/docker/containers/110805b36784ba9132cb5c8aa53b735d1e872233936c0901abe28f3f75a710bf/110805b36784ba9132cb5c8aa53b735d1e872233936c0901abe28f3f75a710bf-json.log",
	        "Name": "/addons-849486",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-849486:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-849486",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/31353d832d5316f2cf2b976850e7985a5e9ed94c5773a72d9eb63ee2765a9f8c-init/diff:/var/lib/docker/overlay2/a3c8edb55465dd5b1044de542fb24c31e00154ba5ba4e9841112d37a01d06a98/diff",
	                "MergedDir": "/var/lib/docker/overlay2/31353d832d5316f2cf2b976850e7985a5e9ed94c5773a72d9eb63ee2765a9f8c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/31353d832d5316f2cf2b976850e7985a5e9ed94c5773a72d9eb63ee2765a9f8c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/31353d832d5316f2cf2b976850e7985a5e9ed94c5773a72d9eb63ee2765a9f8c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-849486",
	                "Source": "/var/lib/docker/volumes/addons-849486/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-849486",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-849486",
	                "name.minikube.sigs.k8s.io": "addons-849486",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5d1aa9829e876ee8e974ccd612766a7eb8a4d370a60753ebc330163f34fbac0c",
	            "SandboxKey": "/var/run/docker/netns/5d1aa9829e87",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34641"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34642"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34645"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34643"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34644"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-849486": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "6cf7ab4ffd119fe4bd883867754b7e9719f07178da2ab1a73467da450a3e07e7",
	                    "EndpointID": "cada104ec607bdc42b2e078063641c818fc4743dfe2fcc7202b917eaa23229af",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-849486",
	                        "110805b36784"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
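
For post-mortems like this one, the useful fields can be pulled from docker inspect with a format template instead of dumping the whole document; for instance the API-server mapping above (8443/tcp -> 127.0.0.1:34644). A sketch, assuming docker on PATH and the container name from the log:

	// inspectport.go - extracts the host port bound to the node's 8443/tcp
	// using docker's Go-template format syntax for NetworkSettings.Ports.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		format := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", format, "addons-849486").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("apiserver published at https://127.0.0.1:%s\n",
			strings.TrimSpace(string(out))) // 34644 in the dump above
	}
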
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-849486 -n addons-849486
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-849486 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-849486 logs -n 25: (1.412695673s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-364967 | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC |                     |
	|         | download-docker-364967                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-364967                                                                   | download-docker-364967 | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC | 31 Jul 24 22:31 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-970793   | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC |                     |
	|         | binary-mirror-970793                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:38085                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-970793                                                                     | binary-mirror-970793   | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC | 31 Jul 24 22:31 UTC |
	| addons  | disable dashboard -p                                                                        | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC |                     |
	|         | addons-849486                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC |                     |
	|         | addons-849486                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-849486 --wait=true                                                                | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC | 31 Jul 24 22:35 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-849486 addons disable                                                                | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:35 UTC | 31 Jul 24 22:35 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-849486 ip                                                                            | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:35 UTC | 31 Jul 24 22:35 UTC |
	| addons  | addons-849486 addons disable                                                                | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:35 UTC | 31 Jul 24 22:35 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-849486 addons disable                                                                | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:36 UTC | 31 Jul 24 22:36 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:36 UTC | 31 Jul 24 22:36 UTC |
	|         | -p addons-849486                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-849486 ssh cat                                                                       | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:36 UTC | 31 Jul 24 22:36 UTC |
	|         | /opt/local-path-provisioner/pvc-9f22855a-010f-402c-a661-b7cd21d58d00_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-849486 addons disable                                                                | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:36 UTC | 31 Jul 24 22:37 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-849486 addons                                                                        | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:36 UTC | 31 Jul 24 22:36 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-849486 addons                                                                        | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:36 UTC | 31 Jul 24 22:36 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:36 UTC | 31 Jul 24 22:36 UTC |
	|         | addons-849486                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:36 UTC | 31 Jul 24 22:36 UTC |
	|         | -p addons-849486                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-849486 addons disable                                                                | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:36 UTC | 31 Jul 24 22:37 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:37 UTC | 31 Jul 24 22:37 UTC |
	|         | addons-849486                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-849486 ssh curl -s                                                                   | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:37 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-849486 ip                                                                            | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:39 UTC | 31 Jul 24 22:39 UTC |
	| addons  | addons-849486 addons disable                                                                | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:39 UTC | 31 Jul 24 22:39 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-849486 addons disable                                                                | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:39 UTC | 31 Jul 24 22:39 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-849486 addons                                                                        | addons-849486          | jenkins | v1.33.1 | 31 Jul 24 22:42 UTC | 31 Jul 24 22:42 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 22:31:33
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 22:31:33.510910 1585635 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:31:33.511056 1585635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:31:33.511067 1585635 out.go:304] Setting ErrFile to fd 2...
	I0731 22:31:33.511072 1585635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:31:33.511314 1585635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1579223/.minikube/bin
	I0731 22:31:33.511782 1585635 out.go:298] Setting JSON to false
	I0731 22:31:33.512674 1585635 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22432,"bootTime":1722442662,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0731 22:31:33.512750 1585635 start.go:139] virtualization:  
	I0731 22:31:33.515566 1585635 out.go:177] * [addons-849486] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0731 22:31:33.518249 1585635 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 22:31:33.518426 1585635 notify.go:220] Checking for updates...
	I0731 22:31:33.522575 1585635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 22:31:33.524914 1585635 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1579223/kubeconfig
	I0731 22:31:33.527037 1585635 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1579223/.minikube
	I0731 22:31:33.529620 1585635 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 22:31:33.531957 1585635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 22:31:33.534274 1585635 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 22:31:33.558578 1585635 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0731 22:31:33.558688 1585635 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 22:31:33.617311 1585635 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-31 22:31:33.60785426 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0731 22:31:33.617437 1585635 docker.go:307] overlay module found
	I0731 22:31:33.624625 1585635 out.go:177] * Using the docker driver based on user configuration
	I0731 22:31:33.626637 1585635 start.go:297] selected driver: docker
	I0731 22:31:33.626655 1585635 start.go:901] validating driver "docker" against <nil>
	I0731 22:31:33.626669 1585635 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 22:31:33.627317 1585635 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 22:31:33.698595 1585635 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-31 22:31:33.689787382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0731 22:31:33.698758 1585635 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 22:31:33.698984 1585635 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 22:31:33.701137 1585635 out.go:177] * Using Docker driver with root privileges
	I0731 22:31:33.703587 1585635 cni.go:84] Creating CNI manager for ""
	I0731 22:31:33.703608 1585635 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 22:31:33.703620 1585635 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 22:31:33.703718 1585635 start.go:340] cluster config:
	{Name:addons-849486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-849486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:31:33.706140 1585635 out.go:177] * Starting "addons-849486" primary control-plane node in "addons-849486" cluster
	I0731 22:31:33.707944 1585635 cache.go:121] Beginning downloading kic base image for docker with crio
	I0731 22:31:33.709697 1585635 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0731 22:31:33.711636 1585635 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 22:31:33.711690 1585635 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1579223/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	I0731 22:31:33.711703 1585635 cache.go:56] Caching tarball of preloaded images
	I0731 22:31:33.711784 1585635 preload.go:172] Found /home/jenkins/minikube-integration/19360-1579223/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0731 22:31:33.711800 1585635 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 22:31:33.712145 1585635 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/config.json ...
	I0731 22:31:33.712179 1585635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/config.json: {Name:mk9a181ce5af1abc5c2aaae723da67339e76d270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:31:33.712298 1585635 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0731 22:31:33.727715 1585635 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0731 22:31:33.727831 1585635 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0731 22:31:33.727855 1585635 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0731 22:31:33.727864 1585635 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0731 22:31:33.727872 1585635 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0731 22:31:33.727877 1585635 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0731 22:31:50.380327 1585635 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0731 22:31:50.380368 1585635 cache.go:194] Successfully downloaded all kic artifacts
	I0731 22:31:50.380427 1585635 start.go:360] acquireMachinesLock for addons-849486: {Name:mk26524a28b5e05c49d38e8337baa6f991516659 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 22:31:50.380549 1585635 start.go:364] duration metric: took 98.075µs to acquireMachinesLock for "addons-849486"
	I0731 22:31:50.380580 1585635 start.go:93] Provisioning new machine with config: &{Name:addons-849486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-849486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 22:31:50.380662 1585635 start.go:125] createHost starting for "" (driver="docker")
	I0731 22:31:50.383583 1585635 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0731 22:31:50.383848 1585635 start.go:159] libmachine.API.Create for "addons-849486" (driver="docker")
	I0731 22:31:50.383891 1585635 client.go:168] LocalClient.Create starting
	I0731 22:31:50.384022 1585635 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca.pem
	I0731 22:31:50.616017 1585635 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/cert.pem
	I0731 22:31:51.403220 1585635 cli_runner.go:164] Run: docker network inspect addons-849486 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 22:31:51.418781 1585635 cli_runner.go:211] docker network inspect addons-849486 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 22:31:51.418872 1585635 network_create.go:284] running [docker network inspect addons-849486] to gather additional debugging logs...
	I0731 22:31:51.418899 1585635 cli_runner.go:164] Run: docker network inspect addons-849486
	W0731 22:31:51.434073 1585635 cli_runner.go:211] docker network inspect addons-849486 returned with exit code 1
	I0731 22:31:51.434106 1585635 network_create.go:287] error running [docker network inspect addons-849486]: docker network inspect addons-849486: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-849486 not found
	I0731 22:31:51.434126 1585635 network_create.go:289] output of [docker network inspect addons-849486]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-849486 not found
	
	** /stderr **
	I0731 22:31:51.434223 1585635 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 22:31:51.450170 1585635 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001752fa0}
	I0731 22:31:51.450217 1585635 network_create.go:124] attempt to create docker network addons-849486 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0731 22:31:51.450278 1585635 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-849486 addons-849486
	I0731 22:31:51.518582 1585635 network_create.go:108] docker network addons-849486 192.168.49.0/24 created
	I0731 22:31:51.518620 1585635 kic.go:121] calculated static IP "192.168.49.2" for the "addons-849486" container
	I0731 22:31:51.518694 1585635 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 22:31:51.534367 1585635 cli_runner.go:164] Run: docker volume create addons-849486 --label name.minikube.sigs.k8s.io=addons-849486 --label created_by.minikube.sigs.k8s.io=true
	I0731 22:31:51.550715 1585635 oci.go:103] Successfully created a docker volume addons-849486
	I0731 22:31:51.550797 1585635 cli_runner.go:164] Run: docker run --rm --name addons-849486-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-849486 --entrypoint /usr/bin/test -v addons-849486:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0731 22:31:53.670897 1585635 cli_runner.go:217] Completed: docker run --rm --name addons-849486-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-849486 --entrypoint /usr/bin/test -v addons-849486:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib: (2.120063761s)
	I0731 22:31:53.670929 1585635 oci.go:107] Successfully prepared a docker volume addons-849486
	I0731 22:31:53.670942 1585635 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 22:31:53.670961 1585635 kic.go:194] Starting extracting preloaded images to volume ...
	I0731 22:31:53.671041 1585635 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19360-1579223/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-849486:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0731 22:31:57.953660 1585635 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19360-1579223/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-849486:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir: (4.282570705s)
	I0731 22:31:57.953703 1585635 kic.go:203] duration metric: took 4.282738778s to extract preloaded images to volume ...
	W0731 22:31:57.953835 1585635 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0731 22:31:57.953956 1585635 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0731 22:31:58.016808 1585635 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-849486 --name addons-849486 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-849486 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-849486 --network addons-849486 --ip 192.168.49.2 --volume addons-849486:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7
	I0731 22:31:58.357833 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Running}}
	I0731 22:31:58.384035 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:31:58.405395 1585635 cli_runner.go:164] Run: docker exec addons-849486 stat /var/lib/dpkg/alternatives/iptables
	I0731 22:31:58.465013 1585635 oci.go:144] the created container "addons-849486" has a running status.
	I0731 22:31:58.465042 1585635 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa...
	I0731 22:31:59.092003 1585635 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0731 22:31:59.117851 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:31:59.149149 1585635 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0731 22:31:59.149225 1585635 kic_runner.go:114] Args: [docker exec --privileged addons-849486 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0731 22:31:59.217221 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:31:59.242987 1585635 machine.go:94] provisionDockerMachine start ...
	I0731 22:31:59.243070 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:31:59.260682 1585635 main.go:141] libmachine: Using SSH client type: native
	I0731 22:31:59.260954 1585635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34641 <nil> <nil>}
	I0731 22:31:59.260963 1585635 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 22:31:59.400956 1585635 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-849486
	
	I0731 22:31:59.401025 1585635 ubuntu.go:169] provisioning hostname "addons-849486"
	I0731 22:31:59.401147 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:31:59.422155 1585635 main.go:141] libmachine: Using SSH client type: native
	I0731 22:31:59.422577 1585635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34641 <nil> <nil>}
	I0731 22:31:59.422672 1585635 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-849486 && echo "addons-849486" | sudo tee /etc/hostname
	I0731 22:31:59.573922 1585635 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-849486
	
	I0731 22:31:59.574080 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:31:59.591094 1585635 main.go:141] libmachine: Using SSH client type: native
	I0731 22:31:59.591348 1585635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34641 <nil> <nil>}
	I0731 22:31:59.591365 1585635 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-849486' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-849486/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-849486' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 22:31:59.725702 1585635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 22:31:59.725788 1585635 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1579223/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1579223/.minikube}
	I0731 22:31:59.725842 1585635 ubuntu.go:177] setting up certificates
	I0731 22:31:59.725880 1585635 provision.go:84] configureAuth start
	I0731 22:31:59.725975 1585635 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-849486
	I0731 22:31:59.743652 1585635 provision.go:143] copyHostCerts
	I0731 22:31:59.743732 1585635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.pem (1082 bytes)
	I0731 22:31:59.743848 1585635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1579223/.minikube/cert.pem (1123 bytes)
	I0731 22:31:59.743902 1585635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1579223/.minikube/key.pem (1679 bytes)
	I0731 22:31:59.743948 1585635 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca-key.pem org=jenkins.addons-849486 san=[127.0.0.1 192.168.49.2 addons-849486 localhost minikube]
	I0731 22:32:00.282847 1585635 provision.go:177] copyRemoteCerts
	I0731 22:32:00.283201 1585635 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 22:32:00.283381 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:00.324981 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:00.436508 1585635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 22:32:00.474009 1585635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 22:32:00.508610 1585635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 22:32:00.539121 1585635 provision.go:87] duration metric: took 813.210459ms to configureAuth
	I0731 22:32:00.539152 1585635 ubuntu.go:193] setting minikube options for container-runtime
	I0731 22:32:00.539377 1585635 config.go:182] Loaded profile config "addons-849486": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:32:00.539491 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:00.559506 1585635 main.go:141] libmachine: Using SSH client type: native
	I0731 22:32:00.559809 1585635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34641 <nil> <nil>}
	I0731 22:32:00.559825 1585635 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 22:32:00.800269 1585635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 22:32:00.800312 1585635 machine.go:97] duration metric: took 1.557306826s to provisionDockerMachine
	I0731 22:32:00.800324 1585635 client.go:171] duration metric: took 10.416423921s to LocalClient.Create
	I0731 22:32:00.800343 1585635 start.go:167] duration metric: took 10.416495913s to libmachine.API.Create "addons-849486"
	I0731 22:32:00.800391 1585635 start.go:293] postStartSetup for "addons-849486" (driver="docker")
	I0731 22:32:00.800423 1585635 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 22:32:00.800519 1585635 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 22:32:00.800670 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:00.820591 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:00.918581 1585635 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 22:32:00.921906 1585635 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 22:32:00.921943 1585635 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 22:32:00.921973 1585635 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 22:32:00.921987 1585635 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0731 22:32:00.921999 1585635 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1579223/.minikube/addons for local assets ...
	I0731 22:32:00.922082 1585635 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1579223/.minikube/files for local assets ...
	I0731 22:32:00.922109 1585635 start.go:296] duration metric: took 121.698256ms for postStartSetup
	I0731 22:32:00.922429 1585635 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-849486
	I0731 22:32:00.939252 1585635 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/config.json ...
	I0731 22:32:00.939628 1585635 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:32:00.939693 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:00.957116 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:01.046193 1585635 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 22:32:01.050758 1585635 start.go:128] duration metric: took 10.670079194s to createHost
	I0731 22:32:01.050784 1585635 start.go:83] releasing machines lock for "addons-849486", held for 10.670221536s
	I0731 22:32:01.050879 1585635 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-849486
	I0731 22:32:01.073367 1585635 ssh_runner.go:195] Run: cat /version.json
	I0731 22:32:01.073429 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:01.073530 1585635 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 22:32:01.073603 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:01.098312 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:01.110781 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:01.318489 1585635 ssh_runner.go:195] Run: systemctl --version
	I0731 22:32:01.322881 1585635 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 22:32:01.466230 1585635 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 22:32:01.470564 1585635 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 22:32:01.491616 1585635 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0731 22:32:01.491694 1585635 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 22:32:01.523532 1585635 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0731 22:32:01.523552 1585635 start.go:495] detecting cgroup driver to use...
	I0731 22:32:01.523585 1585635 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0731 22:32:01.523633 1585635 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 22:32:01.540473 1585635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 22:32:01.552305 1585635 docker.go:217] disabling cri-docker service (if available) ...
	I0731 22:32:01.552368 1585635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 22:32:01.566067 1585635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 22:32:01.581193 1585635 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 22:32:01.673185 1585635 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 22:32:01.777561 1585635 docker.go:233] disabling docker service ...
	I0731 22:32:01.777638 1585635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 22:32:01.797497 1585635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 22:32:01.810203 1585635 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 22:32:01.914815 1585635 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 22:32:02.020469 1585635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 22:32:02.033289 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 22:32:02.051363 1585635 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 22:32:02.051493 1585635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:32:02.062424 1585635 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 22:32:02.062541 1585635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:32:02.073475 1585635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:32:02.083339 1585635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:32:02.093221 1585635 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 22:32:02.102492 1585635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:32:02.112558 1585635 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:32:02.128684 1585635 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:32:02.138839 1585635 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 22:32:02.147528 1585635 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 22:32:02.156071 1585635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:32:02.253217 1585635 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 22:32:02.383925 1585635 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 22:32:02.384019 1585635 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 22:32:02.387942 1585635 start.go:563] Will wait 60s for crictl version
	I0731 22:32:02.388026 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:32:02.391317 1585635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 22:32:02.430786 1585635 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0731 22:32:02.430912 1585635 ssh_runner.go:195] Run: crio --version
	I0731 22:32:02.470262 1585635 ssh_runner.go:195] Run: crio --version
	I0731 22:32:02.511268 1585635 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.24.6 ...
	I0731 22:32:02.513265 1585635 cli_runner.go:164] Run: docker network inspect addons-849486 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 22:32:02.529163 1585635 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0731 22:32:02.532739 1585635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 22:32:02.543485 1585635 kubeadm.go:883] updating cluster {Name:addons-849486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-849486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 22:32:02.543610 1585635 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 22:32:02.543670 1585635 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 22:32:02.621378 1585635 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 22:32:02.621405 1585635 crio.go:433] Images already preloaded, skipping extraction
	I0731 22:32:02.621464 1585635 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 22:32:02.656117 1585635 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 22:32:02.656137 1585635 cache_images.go:84] Images are preloaded, skipping loading
	I0731 22:32:02.656145 1585635 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.3 crio true true} ...
	I0731 22:32:02.656251 1585635 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-849486 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-849486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 22:32:02.656338 1585635 ssh_runner.go:195] Run: crio config
	I0731 22:32:02.705770 1585635 cni.go:84] Creating CNI manager for ""
	I0731 22:32:02.705795 1585635 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 22:32:02.705805 1585635 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 22:32:02.705841 1585635 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-849486 NodeName:addons-849486 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 22:32:02.705994 1585635 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-849486"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
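	If you want to sanity-check a rendered config like the one above before a real init, kubeadm can parse it and print the manifests it would write without mutating the node. A hedged sketch, assuming the config path minikube uses later in this log (/var/tmp/minikube/kubeadm.yaml):
	
	# --dry-run validates the InitConfiguration/ClusterConfiguration and
	# shows the generated static Pod manifests without starting anything.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run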
	
	I0731 22:32:02.706070 1585635 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 22:32:02.714917 1585635 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 22:32:02.715010 1585635 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 22:32:02.723591 1585635 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0731 22:32:02.741776 1585635 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 22:32:02.760650 1585635 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0731 22:32:02.778991 1585635 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0731 22:32:02.782294 1585635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 22:32:02.792929 1585635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:32:02.877740 1585635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:32:02.891735 1585635 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486 for IP: 192.168.49.2
	I0731 22:32:02.891756 1585635 certs.go:194] generating shared ca certs ...
	I0731 22:32:02.891772 1585635 certs.go:226] acquiring lock for ca certs: {Name:mk6ccdabf08b8b9bfa2ad4dfbceb108d85e42085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:02.891908 1585635 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.key
	I0731 22:32:03.144057 1585635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.crt ...
	I0731 22:32:03.144090 1585635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.crt: {Name:mk017a40da3591fd0208865b47278b382b71fea7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:03.144320 1585635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.key ...
	I0731 22:32:03.144335 1585635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.key: {Name:mke8ea525ba4233d2e7fbc91d4e136fa0e33fe49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:03.145442 1585635 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/proxy-client-ca.key
	I0731 22:32:03.829130 1585635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1579223/.minikube/proxy-client-ca.crt ...
	I0731 22:32:03.829172 1585635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/proxy-client-ca.crt: {Name:mk5a0f34fdcacd89f0c298d2b42166e20350c428 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:03.829395 1585635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1579223/.minikube/proxy-client-ca.key ...
	I0731 22:32:03.829411 1585635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/proxy-client-ca.key: {Name:mkf2af353dde1d471523c82d104c539eb6e2321f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:03.829984 1585635 certs.go:256] generating profile certs ...
	I0731 22:32:03.830056 1585635 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.key
	I0731 22:32:03.830075 1585635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt with IP's: []
	I0731 22:32:04.137106 1585635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt ...
	I0731 22:32:04.137138 1585635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: {Name:mk4f2ef72148f6dd85edf5d60d243b4e64d61e85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:04.137337 1585635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.key ...
	I0731 22:32:04.137351 1585635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.key: {Name:mkda5591463fb8cab8138f91ff275cae5ae73033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:04.137437 1585635 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/apiserver.key.a4894f25
	I0731 22:32:04.137457 1585635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/apiserver.crt.a4894f25 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0731 22:32:04.500598 1585635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/apiserver.crt.a4894f25 ...
	I0731 22:32:04.500634 1585635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/apiserver.crt.a4894f25: {Name:mk2022dfacca8e76930277a08005fc059318f27d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:04.501398 1585635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/apiserver.key.a4894f25 ...
	I0731 22:32:04.501425 1585635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/apiserver.key.a4894f25: {Name:mk5ed3e5dbeb85607e558b6f3dc86dc1dc1a1b38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:04.502061 1585635 certs.go:381] copying /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/apiserver.crt.a4894f25 -> /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/apiserver.crt
	I0731 22:32:04.502151 1585635 certs.go:385] copying /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/apiserver.key.a4894f25 -> /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/apiserver.key
	I0731 22:32:04.502207 1585635 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/proxy-client.key
	I0731 22:32:04.502232 1585635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/proxy-client.crt with IP's: []
	I0731 22:32:04.964226 1585635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/proxy-client.crt ...
	I0731 22:32:04.964261 1585635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/proxy-client.crt: {Name:mka1f7ab818bd63f572762f7fa1e41c03bf06e3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:04.964859 1585635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/proxy-client.key ...
	I0731 22:32:04.964880 1585635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/proxy-client.key: {Name:mk760ddc11a470c36d6494f0b44f6af495dbbc35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:04.965639 1585635 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 22:32:04.965689 1585635 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca.pem (1082 bytes)
	I0731 22:32:04.965722 1585635 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/cert.pem (1123 bytes)
	I0731 22:32:04.965753 1585635 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/key.pem (1679 bytes)
	I0731 22:32:04.966426 1585635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 22:32:04.991331 1585635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 22:32:05.019290 1585635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 22:32:05.046812 1585635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 22:32:05.072561 1585635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0731 22:32:05.099620 1585635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 22:32:05.126498 1585635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 22:32:05.152268 1585635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 22:32:05.179127 1585635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 22:32:05.204360 1585635 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 22:32:05.223265 1585635 ssh_runner.go:195] Run: openssl version
	I0731 22:32:05.228952 1585635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 22:32:05.238846 1585635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:32:05.242646 1585635 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:32:05.242715 1585635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:32:05.249842 1585635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
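The b5213941.0 name is not arbitrary: OpenSSL looks certificates up in /etc/ssl/certs by subject hash, so the symlink name is derived from the cert itself with the same `openssl x509 -hash` invocation the log runs two lines earlier. To reproduce it by hand:

	# Prints the subject hash OpenSSL uses for the c_rehash-style symlink;
	# for minikubeCA.pem this should print b5213941.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem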
	I0731 22:32:05.259553 1585635 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 22:32:05.263159 1585635 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 22:32:05.263228 1585635 kubeadm.go:392] StartCluster: {Name:addons-849486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-849486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:32:05.263367 1585635 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 22:32:05.263432 1585635 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 22:32:05.318630 1585635 cri.go:89] found id: ""
	I0731 22:32:05.318751 1585635 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 22:32:05.329944 1585635 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 22:32:05.339184 1585635 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0731 22:32:05.339314 1585635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 22:32:05.350964 1585635 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 22:32:05.351032 1585635 kubeadm.go:157] found existing configuration files:
	
	I0731 22:32:05.351108 1585635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 22:32:05.360969 1585635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 22:32:05.361082 1585635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 22:32:05.370069 1585635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 22:32:05.380250 1585635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 22:32:05.380372 1585635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 22:32:05.390340 1585635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 22:32:05.400328 1585635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 22:32:05.400446 1585635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 22:32:05.410716 1585635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 22:32:05.421313 1585635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 22:32:05.421389 1585635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 22:32:05.430096 1585635 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
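Because the docker driver runs the "node" inside a container, several kubeadm preflight checks (swap, CPU/memory minimums, kernel config, pre-existing directories) would fail spuriously, which is why the command above passes such a long --ignore-preflight-errors list; the earlier "ignoring SystemVerification" line says as much. The same mechanism works for ad-hoc runs; a minimal sketch with a shortened, illustrative list:

	# Skip only the checks known to be noisy inside a container;
	# the full list minikube uses is visible in the command above.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=Swap,SystemVerification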
	I0731 22:32:05.478756 1585635 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 22:32:05.479025 1585635 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 22:32:05.519973 1585635 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0731 22:32:05.520131 1585635 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1066-aws
	I0731 22:32:05.520210 1585635 kubeadm.go:310] OS: Linux
	I0731 22:32:05.520286 1585635 kubeadm.go:310] CGROUPS_CPU: enabled
	I0731 22:32:05.520368 1585635 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0731 22:32:05.520450 1585635 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0731 22:32:05.520530 1585635 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0731 22:32:05.520608 1585635 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0731 22:32:05.520690 1585635 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0731 22:32:05.520768 1585635 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0731 22:32:05.520847 1585635 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0731 22:32:05.520926 1585635 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0731 22:32:05.592372 1585635 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 22:32:05.592589 1585635 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 22:32:05.592734 1585635 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 22:32:05.828080 1585635 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 22:32:05.832068 1585635 out.go:204]   - Generating certificates and keys ...
	I0731 22:32:05.832277 1585635 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 22:32:05.832419 1585635 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 22:32:06.031019 1585635 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 22:32:06.382129 1585635 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 22:32:06.607331 1585635 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 22:32:06.902455 1585635 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 22:32:07.648129 1585635 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 22:32:07.648340 1585635 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-849486 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0731 22:32:08.964997 1585635 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 22:32:08.965364 1585635 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-849486 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0731 22:32:09.356903 1585635 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 22:32:09.878965 1585635 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 22:32:10.044142 1585635 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 22:32:10.044552 1585635 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 22:32:10.425788 1585635 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 22:32:10.657351 1585635 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 22:32:10.878485 1585635 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 22:32:11.715065 1585635 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 22:32:12.238190 1585635 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 22:32:12.240742 1585635 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 22:32:12.243742 1585635 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 22:32:12.246269 1585635 out.go:204]   - Booting up control plane ...
	I0731 22:32:12.246380 1585635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 22:32:12.246461 1585635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 22:32:12.246986 1585635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 22:32:12.257091 1585635 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 22:32:12.258815 1585635 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 22:32:12.258975 1585635 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 22:32:12.354413 1585635 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 22:32:12.354506 1585635 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 22:32:13.356027 1585635 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001702112s
	I0731 22:32:13.356115 1585635 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 22:32:19.357411 1585635 kubeadm.go:310] [api-check] The API server is healthy after 6.001342434s
	I0731 22:32:19.378307 1585635 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 22:32:19.390403 1585635 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 22:32:19.417717 1585635 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 22:32:19.417941 1585635 kubeadm.go:310] [mark-control-plane] Marking the node addons-849486 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 22:32:19.428297 1585635 kubeadm.go:310] [bootstrap-token] Using token: 3yv69l.mr5spb70c478inpl
	I0731 22:32:19.430472 1585635 out.go:204]   - Configuring RBAC rules ...
	I0731 22:32:19.430621 1585635 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 22:32:19.441056 1585635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 22:32:19.448280 1585635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 22:32:19.451734 1585635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 22:32:19.454948 1585635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 22:32:19.458353 1585635 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 22:32:19.764407 1585635 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 22:32:20.211704 1585635 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 22:32:20.764079 1585635 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 22:32:20.765410 1585635 kubeadm.go:310] 
	I0731 22:32:20.765484 1585635 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 22:32:20.765496 1585635 kubeadm.go:310] 
	I0731 22:32:20.765572 1585635 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 22:32:20.765579 1585635 kubeadm.go:310] 
	I0731 22:32:20.765603 1585635 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 22:32:20.765663 1585635 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 22:32:20.765716 1585635 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 22:32:20.765724 1585635 kubeadm.go:310] 
	I0731 22:32:20.765776 1585635 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 22:32:20.765782 1585635 kubeadm.go:310] 
	I0731 22:32:20.765828 1585635 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 22:32:20.765836 1585635 kubeadm.go:310] 
	I0731 22:32:20.765886 1585635 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 22:32:20.765961 1585635 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 22:32:20.766030 1585635 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 22:32:20.766038 1585635 kubeadm.go:310] 
	I0731 22:32:20.766119 1585635 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 22:32:20.766195 1585635 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 22:32:20.766203 1585635 kubeadm.go:310] 
	I0731 22:32:20.766284 1585635 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3yv69l.mr5spb70c478inpl \
	I0731 22:32:20.766386 1585635 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6aeac36715e45fc93d88e018ede78e85fe5c7ed540db2ea7e85e78caf89de8d9 \
	I0731 22:32:20.766410 1585635 kubeadm.go:310] 	--control-plane 
	I0731 22:32:20.766418 1585635 kubeadm.go:310] 
	I0731 22:32:20.766517 1585635 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 22:32:20.766527 1585635 kubeadm.go:310] 
	I0731 22:32:20.766606 1585635 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3yv69l.mr5spb70c478inpl \
	I0731 22:32:20.766707 1585635 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6aeac36715e45fc93d88e018ede78e85fe5c7ed540db2ea7e85e78caf89de8d9 
	I0731 22:32:20.769407 1585635 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1066-aws\n", err: exit status 1
	I0731 22:32:20.769524 1585635 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
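The --discovery-token-ca-cert-hash in the join commands above can be recomputed from the cluster CA on the control plane, which is the usual way to verify it before joining a node. A sketch using the standard openssl pipeline from the kubeadm docs, with the CA path taken from the certificatesDir in the kubeadm config earlier in this log:

	# Recompute the sha256 hash of the cluster CA public key; it should
	# match the value printed in the kubeadm join command above.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'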
	I0731 22:32:20.769546 1585635 cni.go:84] Creating CNI manager for ""
	I0731 22:32:20.769558 1585635 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 22:32:20.772007 1585635 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 22:32:20.774312 1585635 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 22:32:20.778116 1585635 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0731 22:32:20.778172 1585635 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0731 22:32:20.797522 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0731 22:32:21.076907 1585635 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 22:32:21.077037 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:21.077157 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-849486 minikube.k8s.io/updated_at=2024_07_31T22_32_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1 minikube.k8s.io/name=addons-849486 minikube.k8s.io/primary=true
	I0731 22:32:21.085488 1585635 ops.go:34] apiserver oom_adj: -16
	I0731 22:32:21.206192 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:21.706478 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:22.206715 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:22.706393 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:23.206724 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:23.707089 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:24.206489 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:24.706888 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:25.206323 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:25.707128 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:26.206522 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:26.706266 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:27.206558 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:27.706844 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:28.206638 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:28.706424 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:29.206677 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:29.706655 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:30.206950 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:30.706366 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:31.207218 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:31.706707 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:32.206876 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:32.707145 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:33.207037 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:33.706961 1585635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:32:33.797367 1585635 kubeadm.go:1113] duration metric: took 12.720372697s to wait for elevateKubeSystemPrivileges
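The long run of identical `kubectl get sa default` calls above is a poll: minikube waits for the default ServiceAccount to appear (created by the controller-manager shortly after the control plane is up) before it can bind RBAC to kube-system:default. The equivalent loop, sketched in shell with roughly the cadence visible in the timestamps:

	# Wait until the default ServiceAccount exists, using the same
	# kubectl binary and kubeconfig paths as the log above.
	until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done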
	I0731 22:32:33.797407 1585635 kubeadm.go:394] duration metric: took 28.534200707s to StartCluster
	I0731 22:32:33.797426 1585635 settings.go:142] acquiring lock: {Name:mk3c0c3b857f6d982767b7eb95481d3e4843baa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:33.797541 1585635 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1579223/kubeconfig
	I0731 22:32:33.797913 1585635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/kubeconfig: {Name:mkfef6e38d1ebcc45fcbbe766a2ae2945f7bd392 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:32:33.798103 1585635 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 22:32:33.798203 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 22:32:33.798456 1585635 config.go:182] Loaded profile config "addons-849486": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:32:33.798495 1585635 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0731 22:32:33.798571 1585635 addons.go:69] Setting yakd=true in profile "addons-849486"
	I0731 22:32:33.798592 1585635 addons.go:234] Setting addon yakd=true in "addons-849486"
	I0731 22:32:33.798614 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:33.799063 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.799199 1585635 addons.go:69] Setting inspektor-gadget=true in profile "addons-849486"
	I0731 22:32:33.799218 1585635 addons.go:234] Setting addon inspektor-gadget=true in "addons-849486"
	I0731 22:32:33.799241 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:33.799589 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.799901 1585635 addons.go:69] Setting metrics-server=true in profile "addons-849486"
	I0731 22:32:33.799960 1585635 addons.go:234] Setting addon metrics-server=true in "addons-849486"
	I0731 22:32:33.799997 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:33.800416 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.801826 1585635 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-849486"
	I0731 22:32:33.801857 1585635 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-849486"
	I0731 22:32:33.801887 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:33.802264 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.803496 1585635 addons.go:69] Setting cloud-spanner=true in profile "addons-849486"
	I0731 22:32:33.803539 1585635 addons.go:234] Setting addon cloud-spanner=true in "addons-849486"
	I0731 22:32:33.805393 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:33.806361 1585635 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-849486"
	I0731 22:32:33.806426 1585635 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-849486"
	I0731 22:32:33.806460 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:33.806834 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.807114 1585635 addons.go:69] Setting registry=true in profile "addons-849486"
	I0731 22:32:33.807150 1585635 addons.go:234] Setting addon registry=true in "addons-849486"
	I0731 22:32:33.807176 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:33.807600 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.811465 1585635 addons.go:69] Setting default-storageclass=true in profile "addons-849486"
	I0731 22:32:33.811556 1585635 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-849486"
	I0731 22:32:33.811906 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.823330 1585635 addons.go:69] Setting storage-provisioner=true in profile "addons-849486"
	I0731 22:32:33.823389 1585635 addons.go:234] Setting addon storage-provisioner=true in "addons-849486"
	I0731 22:32:33.823547 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:33.823988 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.826550 1585635 addons.go:69] Setting gcp-auth=true in profile "addons-849486"
	I0731 22:32:33.826628 1585635 mustload.go:65] Loading cluster: addons-849486
	I0731 22:32:33.826827 1585635 config.go:182] Loaded profile config "addons-849486": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:32:33.827119 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.836753 1585635 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-849486"
	I0731 22:32:33.836807 1585635 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-849486"
	I0731 22:32:33.837146 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.837281 1585635 addons.go:69] Setting ingress=true in profile "addons-849486"
	I0731 22:32:33.837309 1585635 addons.go:234] Setting addon ingress=true in "addons-849486"
	I0731 22:32:33.837348 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:33.837706 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.856144 1585635 addons.go:69] Setting ingress-dns=true in profile "addons-849486"
	I0731 22:32:33.856196 1585635 addons.go:234] Setting addon ingress-dns=true in "addons-849486"
	I0731 22:32:33.856243 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:33.856703 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.867285 1585635 addons.go:69] Setting volcano=true in profile "addons-849486"
	I0731 22:32:33.867333 1585635 addons.go:234] Setting addon volcano=true in "addons-849486"
	I0731 22:32:33.867372 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:33.867815 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.883202 1585635 addons.go:69] Setting volumesnapshots=true in profile "addons-849486"
	I0731 22:32:33.883298 1585635 addons.go:234] Setting addon volumesnapshots=true in "addons-849486"
	I0731 22:32:33.883368 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:33.884601 1585635 out.go:177] * Verifying Kubernetes components...
	I0731 22:32:33.893878 1585635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:32:33.927873 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.951805 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:33.975874 1585635 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0731 22:32:33.977750 1585635 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0731 22:32:33.977811 1585635 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0731 22:32:33.977907 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
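The inspect template above digs the host port mapped to the container's 22/tcp out of NetworkSettings so minikube knows where to ssh; `docker port` reports the same mapping more directly if you are checking by hand:

	# Both print the host-side endpoint for the container's SSH port.
	docker port addons-849486 22
	docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-849486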
	I0731 22:32:33.994407 1585635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0731 22:32:33.997276 1585635 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0731 22:32:33.998919 1585635 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0731 22:32:33.999023 1585635 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0731 22:32:33.999471 1585635 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0731 22:32:33.999488 1585635 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0731 22:32:33.999561 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:33.999772 1585635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0731 22:32:34.002222 1585635 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0731 22:32:34.002244 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0731 22:32:34.002314 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:34.023776 1585635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0731 22:32:34.026094 1585635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0731 22:32:34.028229 1585635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0731 22:32:34.030323 1585635 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0731 22:32:34.032822 1585635 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0731 22:32:34.035068 1585635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0731 22:32:34.037390 1585635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0731 22:32:34.037425 1585635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0731 22:32:34.037506 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:34.039156 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:34.043166 1585635 addons.go:234] Setting addon default-storageclass=true in "addons-849486"
	I0731 22:32:34.043251 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:34.043729 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:34.051923 1585635 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 22:32:34.051950 1585635 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 22:32:34.052026 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:34.069351 1585635 out.go:177]   - Using image docker.io/registry:2.8.3
	I0731 22:32:34.069413 1585635 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 22:32:34.071845 1585635 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 22:32:34.071870 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 22:32:34.071952 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:34.078672 1585635 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0731 22:32:34.081054 1585635 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 22:32:34.081078 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0731 22:32:34.081165 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:34.090112 1585635 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0731 22:32:34.093595 1585635 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0731 22:32:34.093621 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0731 22:32:34.093688 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:34.128027 1585635 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-849486"
	I0731 22:32:34.128069 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:34.128484 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	W0731 22:32:34.157774 1585635 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0731 22:32:34.201020 1585635 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 22:32:34.203746 1585635 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0731 22:32:34.206023 1585635 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 22:32:34.209680 1585635 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 22:32:34.209742 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0731 22:32:34.209823 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:34.236162 1585635 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0731 22:32:34.242043 1585635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0731 22:32:34.242073 1585635 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0731 22:32:34.242146 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:34.256155 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:34.260528 1585635 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0731 22:32:34.265516 1585635 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0731 22:32:34.265536 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0731 22:32:34.265601 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:34.267359 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:34.268059 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:34.270811 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:34.281631 1585635 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 22:32:34.281653 1585635 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 22:32:34.281730 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:34.302517 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:34.319770 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:34.320277 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:34.320680 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:34.323518 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 22:32:34.323697 1585635 ssh_runner.go:195] Run: sudo systemctl start kubelet
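The bash pipeline at 22:32:34.323518 above edits the coredns ConfigMap in place: it dumps the Corefile, uses sed to splice a hosts stanza (mapping host.minikube.internal to the gateway address 192.168.49.1) in front of the forward-to-resolv.conf directive, then pipes the result back through kubectl replace -f -. A minimal Go sketch of the same string surgery, assuming the stock eight-space Corefile indentation that the sed addresses rely on (the function name is hypothetical):

package main

import "strings"

// injectHostRecord splices a hosts{} stanza into a Corefile ahead of the
// forward directive, mirroring the sed edit in the log line above.
func injectHostRecord(corefile, hostIP string) string {
	stanza := []string{
		"        hosts {",
		"           " + hostIP + " host.minikube.internal",
		"           fallthrough",
		"        }",
	}
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		// Same anchor the sed address /^        forward ./ matches on.
		if strings.HasPrefix(line, "        forward . /etc/resolv.conf") {
			out = append(out, stanza...)
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

Feeding the result back through kubectl replace (rather than apply) rewrites the whole ConfigMap in one PUT instead of merging patches.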
	I0731 22:32:34.331362 1585635 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0731 22:32:34.333354 1585635 out.go:177]   - Using image docker.io/busybox:stable
	I0731 22:32:34.337085 1585635 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0731 22:32:34.337161 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0731 22:32:34.337227 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
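Each "scp memory --> <path>" line is minikube streaming an embedded asset straight from process memory to a file on the node, over the SSH endpoint it just resolved from the docker container inspect port template above (127.0.0.1:34641). A sketch of that push, under the assumption it behaves like piping the bytes into sudo tee on the remote side; this is illustrative, not the actual ssh_runner implementation:

package main

import (
	"bytes"

	"golang.org/x/crypto/ssh"
)

// pushAsset writes an in-memory manifest to a root-owned path on the node,
// e.g. /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes).
func pushAsset(client *ssh.Client, data []byte, dest string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data) // stream the asset; no temp file on disk
	return sess.Run("sudo tee " + dest + " > /dev/null")
}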
	I0731 22:32:34.365096 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:34.389868 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:34.407286 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	W0731 22:32:34.413237 1585635 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0731 22:32:34.413267 1585635 retry.go:31] will retry after 290.879268ms: ssh: handshake failed: EOF
	I0731 22:32:34.420781 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:34.421198 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	W0731 22:32:34.422449 1585635 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0731 22:32:34.422470 1585635 retry.go:31] will retry after 184.831013ms: ssh: handshake failed: EOF
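The two handshake EOFs here are benign: a dozen or so SSH sessions are being opened against the same forwarded port at once, and the earliest dials can land before sshd is ready, so sshutil retries on a short randomized backoff (184ms, then 290ms above). The pattern as a hedged stand-alone helper; the schedule is illustrative and not retry.go's exact jitter:

package main

import (
	"math/rand"
	"time"
)

// retryWithJitter reruns fn on failure, sleeping a randomized, roughly
// doubling interval between attempts, like the waits logged above.
func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base << uint(i)
		time.Sleep(d/2 + time.Duration(rand.Int63n(int64(d)))) // 0.5d..1.5d
	}
	return err
}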
	I0731 22:32:34.614002 1585635 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0731 22:32:34.614029 1585635 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0731 22:32:34.673256 1585635 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0731 22:32:34.673283 1585635 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0731 22:32:34.681953 1585635 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 22:32:34.681982 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0731 22:32:34.686912 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0731 22:32:34.702030 1585635 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0731 22:32:34.702058 1585635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0731 22:32:34.707138 1585635 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0731 22:32:34.707164 1585635 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0731 22:32:34.753241 1585635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0731 22:32:34.753266 1585635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0731 22:32:34.756662 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 22:32:34.766426 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 22:32:34.782449 1585635 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0731 22:32:34.782481 1585635 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0731 22:32:34.796896 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 22:32:34.800880 1585635 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 22:32:34.800907 1585635 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 22:32:34.864007 1585635 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0731 22:32:34.864035 1585635 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0731 22:32:34.865845 1585635 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0731 22:32:34.865881 1585635 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0731 22:32:34.879524 1585635 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0731 22:32:34.879565 1585635 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0731 22:32:34.890594 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 22:32:34.962585 1585635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0731 22:32:34.962629 1585635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0731 22:32:34.994175 1585635 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 22:32:34.994204 1585635 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 22:32:35.029336 1585635 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0731 22:32:35.029374 1585635 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0731 22:32:35.041001 1585635 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0731 22:32:35.041034 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0731 22:32:35.064773 1585635 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0731 22:32:35.064803 1585635 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0731 22:32:35.140033 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0731 22:32:35.143482 1585635 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0731 22:32:35.143509 1585635 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0731 22:32:35.172282 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 22:32:35.200211 1585635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0731 22:32:35.200257 1585635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0731 22:32:35.205554 1585635 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0731 22:32:35.205592 1585635 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0731 22:32:35.240681 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0731 22:32:35.257775 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0731 22:32:35.285336 1585635 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0731 22:32:35.285370 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0731 22:32:35.304250 1585635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0731 22:32:35.304281 1585635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0731 22:32:35.318376 1585635 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0731 22:32:35.318403 1585635 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0731 22:32:35.364038 1585635 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0731 22:32:35.364066 1585635 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0731 22:32:35.458941 1585635 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 22:32:35.458967 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0731 22:32:35.466795 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0731 22:32:35.471415 1585635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0731 22:32:35.471442 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0731 22:32:35.535626 1585635 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0731 22:32:35.535654 1585635 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0731 22:32:35.637242 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 22:32:35.643247 1585635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0731 22:32:35.643276 1585635 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0731 22:32:35.695922 1585635 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 22:32:35.695948 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0731 22:32:35.863818 1585635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0731 22:32:35.863852 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0731 22:32:35.925947 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 22:32:36.052850 1585635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0731 22:32:36.052881 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0731 22:32:36.186774 1585635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 22:32:36.186806 1585635 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0731 22:32:36.307358 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 22:32:36.826362 1585635 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.502637533s)
	I0731 22:32:36.827377 1585635 node_ready.go:35] waiting up to 6m0s for node "addons-849486" to be "Ready" ...
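node_ready.go's 6m0s wait reduces to polling the node object and reading its Ready condition; the recurring "Ready":"False" lines below are single iterations of that poll. A client-go sketch of the check itself (the helper name is hypothetical):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeIsReady reports whether the named node's Ready condition is True.
func nodeIsReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil // no Ready condition reported yet
}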
	I0731 22:32:36.827599 1585635 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.504059706s)
	I0731 22:32:36.827619 1585635 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0731 22:32:36.917280 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.230329672s)
	I0731 22:32:37.635839 1585635 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-849486" context rescaled to 1 replicas
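The rescale at kapi.go:214 trims coredns from the kubeadm default of two replicas down to one, which is all a single-node minikube cluster needs. Done against the scale subresource it looks roughly like this (a sketch; minikube may equally patch the deployment directly):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleDeployment sets the replica count via the scale subresource,
// e.g. scaleDeployment(ctx, c, "kube-system", "coredns", 1).
func scaleDeployment(ctx context.Context, c kubernetes.Interface, ns, name string, replicas int32) error {
	scale, err := c.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = c.AppsV1().Deployments(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
	return err
}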
	I0731 22:32:38.971338 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:32:39.608209 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.851496424s)
	I0731 22:32:40.730281 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.963816204s)
	I0731 22:32:40.730373 1585635 addons.go:475] Verifying addon ingress=true in "addons-849486"
	I0731 22:32:40.730796 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.933872607s)
	I0731 22:32:40.730979 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.84035121s)
	I0731 22:32:40.731031 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.590974926s)
	I0731 22:32:40.731087 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.558779907s)
	I0731 22:32:40.731096 1585635 addons.go:475] Verifying addon metrics-server=true in "addons-849486"
	I0731 22:32:40.731151 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.490416885s)
	I0731 22:32:40.731160 1585635 addons.go:475] Verifying addon registry=true in "addons-849486"
	I0731 22:32:40.731523 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.473719263s)
	I0731 22:32:40.731711 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.264880426s)
	I0731 22:32:40.731839 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.094563053s)
	I0731 22:32:40.731922 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.805898409s)
	W0731 22:32:40.732689 1585635 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0731 22:32:40.732716 1585635 retry.go:31] will retry after 192.341389ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
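This failure is an ordering race, not a broken manifest: one kubectl apply batch creates both the VolumeSnapshotClass CRD and a VolumeSnapshotClass object, and kubectl's discovery/RESTMapper runs before the freshly created CRD is established, hence "no matches for kind ... ensure CRDs are installed first". The addon machinery treats it as transient and re-applies; the retry that eventually succeeds (22:32:40.925870 below) also adds --force. A sketch of such a retry loop around kubectl, keying on that stderr marker; the attempt count and sleep are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// applyWithCRDRetry re-runs `kubectl apply -f f1 -f f2 ...` while the only
// failure is the not-yet-established-CRD mapping error.
func applyWithCRDRetry(files []string) error {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	var lastErr error
	for attempt := 0; attempt < 5; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("kubectl apply: %v: %s", err, out)
		if !strings.Contains(string(out), "no matches for kind") {
			return lastErr // a real failure; don't mask it
		}
		time.Sleep(200 * time.Millisecond) // let the CRD become established
	}
	return lastErr
}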
	I0731 22:32:40.733872 1585635 out.go:177] * Verifying ingress addon...
	I0731 22:32:40.735147 1585635 out.go:177] * Verifying registry addon...
	I0731 22:32:40.735166 1585635 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-849486 service yakd-dashboard -n yakd-dashboard
	
	I0731 22:32:40.738239 1585635 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0731 22:32:40.739205 1585635 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0731 22:32:40.759027 1585635 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0731 22:32:40.759090 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:40.765744 1585635 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0731 22:32:40.765818 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
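From here on the log is dominated by interleaved "waiting for pod" lines: one polling loop per addon (registry, ingress-nginx, csi-hostpath-driver, and soon gcp-auth), each listing pods by label selector until they leave Pending. The shape of that loop in client-go terms; the selectors come from the log, while the helper and its timings are illustrative:

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPods polls until every pod matching selector in ns is Running, e.g.
// waitForPods(ctx, c, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx").
func waitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient; keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // still Pending, as in the lines below
				}
			}
			return true, nil
		})
}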
	W0731 22:32:40.783684 1585635 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
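The 'default-storageclass' warning is a plain optimistic-concurrency conflict: the storage-provisioner-rancher addon had just created or updated the local-path StorageClass, so the write that tried to mark it non-default carried a stale resourceVersion. The stock client-go remedy is to redo the Get-mutate-Update under retry.RetryOnConflict; in this sketch the annotation key is the real default-class marker, while the helper itself is hypothetical:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// markNonDefault clears the default-class marker on a StorageClass,
// retrying on resourceVersion conflicts like the one logged above.
func markNonDefault(ctx context.Context, c kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := c.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = c.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err // a 409 Conflict here triggers a fresh Get and retry
	})
}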
	I0731 22:32:40.925870 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 22:32:41.262167 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:41.264940 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:41.386966 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:32:41.564556 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.257136217s)
	I0731 22:32:41.564648 1585635 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-849486"
	I0731 22:32:41.567103 1585635 out.go:177] * Verifying csi-hostpath-driver addon...
	I0731 22:32:41.571370 1585635 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0731 22:32:41.622859 1585635 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0731 22:32:41.622933 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:41.744383 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:41.745012 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:42.077278 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:42.252256 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:42.255666 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:42.318415 1585635 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0731 22:32:42.318505 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:42.352276 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:42.481330 1585635 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0731 22:32:42.515983 1585635 addons.go:234] Setting addon gcp-auth=true in "addons-849486"
	I0731 22:32:42.516037 1585635 host.go:66] Checking if "addons-849486" exists ...
	I0731 22:32:42.516494 1585635 cli_runner.go:164] Run: docker container inspect addons-849486 --format={{.State.Status}}
	I0731 22:32:42.544941 1585635 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0731 22:32:42.545014 1585635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-849486
	I0731 22:32:42.568744 1585635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/addons-849486/id_rsa Username:docker}
	I0731 22:32:42.577260 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:42.743056 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:42.745599 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:43.077148 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:43.245121 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:43.246131 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:43.576555 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:43.769034 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:43.769994 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:43.844871 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:32:44.076871 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:44.177742 1585635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.251828028s)
	I0731 22:32:44.177855 1585635 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.632877013s)
	I0731 22:32:44.180295 1585635 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 22:32:44.182445 1585635 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0731 22:32:44.184829 1585635 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0731 22:32:44.184884 1585635 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0731 22:32:44.212759 1585635 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0731 22:32:44.212837 1585635 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0731 22:32:44.231550 1585635 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 22:32:44.231669 1585635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0731 22:32:44.245781 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:44.246221 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:44.256311 1585635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 22:32:44.576248 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:44.753951 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:44.755250 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:44.877627 1585635 addons.go:475] Verifying addon gcp-auth=true in "addons-849486"
	I0731 22:32:44.880009 1585635 out.go:177] * Verifying gcp-auth addon...
	I0731 22:32:44.883172 1585635 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0731 22:32:44.887849 1585635 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0731 22:32:44.887919 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:45.090712 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:45.247135 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:45.249137 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:45.389389 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:45.576034 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:45.743912 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:45.744141 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:45.887236 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:46.076234 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:46.242659 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:46.244638 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:46.330631 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:32:46.387466 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:46.575637 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:46.744520 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:46.745383 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:46.887579 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:47.075957 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:47.243562 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:47.243715 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:47.386486 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:47.575429 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:47.742886 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:47.743102 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:47.886745 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:48.076022 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:48.243277 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:48.244229 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:48.386624 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:48.575845 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:48.742319 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:48.744367 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:48.830378 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:32:48.886983 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:49.076063 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:49.242544 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:49.243638 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:49.386885 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:49.575976 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:49.743622 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:49.745009 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:49.886723 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:50.075539 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:50.243289 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:50.244039 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:50.386415 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:50.575926 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:50.742467 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:50.743541 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:50.831310 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:32:50.887654 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:51.075487 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:51.242267 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:51.243398 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:51.386228 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:51.576435 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:51.742838 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:51.743277 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:51.887435 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:52.076148 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:52.255669 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:52.256630 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:52.387128 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:52.575760 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:52.742006 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:52.743348 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:52.887549 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:53.076613 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:53.242422 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:53.243176 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:53.330705 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:32:53.386479 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:53.575225 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:53.742284 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:53.743497 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:53.886508 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:54.075472 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:54.243675 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:54.244402 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:54.387198 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:54.575856 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:54.743038 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:54.744293 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:54.886424 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:55.075975 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:55.244357 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:55.244381 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:55.386862 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:55.575991 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:55.743589 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:55.744656 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:55.830763 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:32:55.886517 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:56.075701 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:56.243400 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:56.244101 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:56.387194 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:56.576086 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:56.742513 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:56.743796 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:56.887311 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:57.076096 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:57.242051 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:57.243212 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:57.386682 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:57.576453 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:57.742357 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:57.744079 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:57.831382 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:32:57.886971 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:58.076338 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:58.242299 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:58.243475 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:58.387433 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:58.575633 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:58.742103 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:58.742999 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:58.886662 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:59.075576 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:59.244900 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:59.246342 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:59.387074 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:32:59.576231 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:32:59.742470 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:32:59.743378 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:32:59.887345 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:00.087879 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:00.273190 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:00.273970 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:00.337689 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:33:00.387618 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:00.575811 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:00.743059 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:00.744228 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:00.886866 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:01.075840 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:01.243283 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:01.244117 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:01.387279 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:01.576499 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:01.744588 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:01.744728 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:01.886963 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:02.075985 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:02.242779 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:02.243816 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:02.386346 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:02.576019 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:02.744680 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:02.746178 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:02.831029 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:33:02.886942 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:03.076206 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:03.242979 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:03.244026 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:03.400313 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:03.575961 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:03.745510 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:03.746689 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:03.887426 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:04.076155 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:04.244568 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:04.245955 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:04.386830 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:04.576095 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:04.742032 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:04.742861 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:04.831212 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:33:04.886527 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:05.075985 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:05.243598 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:05.244272 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:05.387347 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:05.575338 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:05.743181 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:05.743911 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:05.887476 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:06.076486 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:06.242708 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:06.244169 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:06.386904 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:06.575824 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:06.743190 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:06.744352 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:06.886948 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:07.075642 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:07.243175 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:07.245563 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:07.330643 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:33:07.386491 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:07.579509 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:07.743657 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:07.744485 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:07.887276 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:08.075610 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:08.243705 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:08.244636 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:08.386884 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:08.575851 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:08.743386 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:08.744102 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:08.886775 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:09.076849 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:09.242494 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:09.243894 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:09.331209 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:33:09.387698 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:09.575800 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:09.743178 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:09.744020 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:09.886411 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:10.075588 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:10.242910 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:10.243518 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:10.387150 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:10.576509 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:10.742437 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:10.744196 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:10.886860 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:11.077525 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:11.243246 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:11.244089 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:11.332639 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:33:11.386948 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:11.576090 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:11.745081 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:11.747185 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:11.887156 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:12.076841 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:12.244185 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:12.244480 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:12.387200 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:12.576262 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:12.743793 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:12.745368 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:12.887560 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:13.075825 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:13.243062 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:13.243792 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:13.386950 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:13.575512 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:13.743698 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:13.744601 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:13.831057 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:33:13.887284 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:14.075720 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:14.243561 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:14.243997 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:14.386532 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:14.575816 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:14.743331 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:14.744045 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:14.887434 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:15.075930 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:15.243695 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:15.244257 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:15.386880 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:15.576057 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:15.742325 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:15.743569 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:15.887364 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:16.076223 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:16.242778 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:16.243580 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:16.330601 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:33:16.387101 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:16.575683 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:16.741916 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:16.743935 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:16.887176 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:17.076280 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:17.242445 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:17.243460 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:17.386437 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:17.576205 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:17.741908 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:17.743366 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:17.886588 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:18.076795 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:18.242934 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:18.243862 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:18.331565 1585635 node_ready.go:53] node "addons-849486" has status "Ready":"False"
	I0731 22:33:18.387366 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:18.575888 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:18.743095 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:18.743620 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:18.886759 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:19.075983 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:19.243492 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:19.244248 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:19.386384 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:19.575306 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:19.742296 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:19.743036 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:19.886512 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:20.095710 1585635 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0731 22:33:20.095746 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:20.272166 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:20.272761 1585635 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0731 22:33:20.272774 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:20.396236 1585635 node_ready.go:49] node "addons-849486" has status "Ready":"True"
	I0731 22:33:20.396272 1585635 node_ready.go:38] duration metric: took 43.568862212s for node "addons-849486" to be "Ready" ...
	I0731 22:33:20.396284 1585635 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 22:33:20.460926 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:20.465502 1585635 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qv2pm" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:20.627876 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:20.806883 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:20.807874 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:20.897095 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:21.077626 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:21.247125 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:21.249917 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:21.386977 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:21.576397 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:21.742968 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:21.744283 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:21.887461 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:21.974482 1585635 pod_ready.go:92] pod "coredns-7db6d8ff4d-qv2pm" in "kube-system" namespace has status "Ready":"True"
	I0731 22:33:21.974507 1585635 pod_ready.go:81] duration metric: took 1.508971834s for pod "coredns-7db6d8ff4d-qv2pm" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:21.974533 1585635 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-849486" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:21.980872 1585635 pod_ready.go:92] pod "etcd-addons-849486" in "kube-system" namespace has status "Ready":"True"
	I0731 22:33:21.980896 1585635 pod_ready.go:81] duration metric: took 6.356016ms for pod "etcd-addons-849486" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:21.980910 1585635 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-849486" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:21.987115 1585635 pod_ready.go:92] pod "kube-apiserver-addons-849486" in "kube-system" namespace has status "Ready":"True"
	I0731 22:33:21.987147 1585635 pod_ready.go:81] duration metric: took 6.229673ms for pod "kube-apiserver-addons-849486" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:21.987158 1585635 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-849486" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:21.992769 1585635 pod_ready.go:92] pod "kube-controller-manager-addons-849486" in "kube-system" namespace has status "Ready":"True"
	I0731 22:33:21.992794 1585635 pod_ready.go:81] duration metric: took 5.628051ms for pod "kube-controller-manager-addons-849486" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:21.992806 1585635 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mxw62" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:21.998987 1585635 pod_ready.go:92] pod "kube-proxy-mxw62" in "kube-system" namespace has status "Ready":"True"
	I0731 22:33:21.999021 1585635 pod_ready.go:81] duration metric: took 6.207208ms for pod "kube-proxy-mxw62" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:21.999031 1585635 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-849486" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:22.080257 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:22.245906 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:22.247278 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:22.370669 1585635 pod_ready.go:92] pod "kube-scheduler-addons-849486" in "kube-system" namespace has status "Ready":"True"
	I0731 22:33:22.370696 1585635 pod_ready.go:81] duration metric: took 371.657346ms for pod "kube-scheduler-addons-849486" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:22.370708 1585635 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace to be "Ready" ...
	I0731 22:33:22.387294 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:22.577295 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:22.743926 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:22.746520 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:22.887355 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:23.077307 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:23.244498 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:23.247478 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:23.389515 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:23.580542 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:23.751087 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:23.752014 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:23.887061 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:24.084325 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:24.253062 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:24.254305 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:24.378689 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:24.419268 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:24.578345 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:24.747327 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:24.748370 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:24.888070 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:25.080909 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:25.243772 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:25.245844 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:25.387252 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:25.576849 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:25.743092 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:25.744783 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:25.886616 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:26.078207 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:26.245273 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:26.245784 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:26.387173 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:26.577887 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:26.744918 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:26.752885 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:26.877086 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:26.887302 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:27.078036 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:27.243523 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:27.245357 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:27.386897 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:27.578774 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:27.744424 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:27.744799 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:27.887110 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:28.078521 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:28.243167 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:28.245480 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:28.387114 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:28.577475 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:28.744241 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:28.745120 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:28.887164 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:29.080269 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:29.246372 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:29.247455 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:29.381194 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:29.388617 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:29.582722 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:29.743822 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:29.748171 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:29.888787 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:30.078620 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:30.244281 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:30.245613 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:30.387214 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:30.576633 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:30.743549 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:30.744903 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:30.886501 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:31.077652 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:31.244901 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:31.245554 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:31.387121 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:31.577010 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:31.743759 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:31.747394 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:31.876912 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:31.887678 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:32.077050 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:32.244646 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:32.245875 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:32.391848 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:32.577464 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:32.744055 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:32.745355 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:32.887249 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:33.077781 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:33.243618 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:33.246059 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:33.386794 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:33.576870 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:33.746127 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:33.748896 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:33.878101 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:33.888168 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:34.084976 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:34.244939 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:34.249516 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:34.388026 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:34.578277 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:34.743193 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:34.747468 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:34.888315 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:35.083949 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:35.245861 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:35.246727 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:35.387305 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:35.593815 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:35.747984 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:35.748986 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:35.887755 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:36.078258 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:36.244080 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:36.245289 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:36.377424 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:36.386843 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:36.579585 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:36.757645 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:36.759630 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:36.886308 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:37.077329 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:37.243782 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:37.245017 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:37.386846 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:37.580954 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:37.745280 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:37.746815 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:37.887312 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:38.078610 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:38.243602 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:38.244574 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:38.387100 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:38.577481 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:38.744334 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:38.745285 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:38.876027 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:38.887341 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:39.082444 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:39.246206 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:39.246850 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:39.387153 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:39.577282 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:39.762239 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:39.766300 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:39.890739 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:40.081500 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:40.255171 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:40.256862 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:40.394848 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:40.578235 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:40.757451 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:40.758364 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:40.879216 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:40.886521 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:41.086198 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:41.247832 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:41.249566 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:41.388319 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:41.579938 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:41.746386 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:41.749680 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:41.888087 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:42.099034 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:42.266590 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:42.270920 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:42.409245 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:42.577777 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:42.758638 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:42.760338 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:42.882459 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:42.887872 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:43.078741 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:43.249823 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:43.251616 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:43.390288 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:43.578025 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:43.745801 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:43.747420 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:43.887219 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:44.077598 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:44.243842 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:44.247979 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:44.386914 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:44.578731 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:44.745658 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:44.747530 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:44.886897 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:45.083802 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:45.244975 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:45.246087 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:45.377320 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:45.387475 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:45.577783 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:45.743519 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:45.745987 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:45.887224 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:46.077779 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:46.244599 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:46.245361 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:46.386801 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:46.578011 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:46.743926 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:46.744775 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:46.888676 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:47.077405 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:47.243705 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:47.248275 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:47.387013 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:47.578263 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:47.746441 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:47.746699 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:47.891906 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:47.892804 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:48.077962 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:48.246465 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:48.249008 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:48.386748 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:48.577898 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:48.753817 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:48.755514 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:48.887274 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:49.077991 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:49.243959 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:49.245377 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:49.387024 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:49.577246 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:49.744021 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:49.744449 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:49.887214 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:50.076850 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:50.249775 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:50.250084 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:50.395278 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:50.407223 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:50.578870 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:50.746257 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:50.747170 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:50.887341 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:51.079443 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:51.249375 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:51.255737 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:51.387901 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:51.577271 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:51.744139 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:51.747638 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:51.905706 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:52.077754 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:52.244108 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:52.245651 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:52.387550 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:52.577451 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:52.744650 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:52.745692 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:52.877516 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:52.889026 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:53.079252 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:53.244853 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:53.246222 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:53.390921 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:53.577242 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:53.745558 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:53.745778 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:53.887893 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:54.081987 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:54.243628 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:54.245938 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:54.388640 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:54.578026 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:54.749012 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:54.754160 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:54.879380 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:54.888296 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:55.078667 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:55.245214 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:55.246513 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:55.387078 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:55.578115 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:55.745488 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:55.748318 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:55.887164 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:56.095720 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:56.244702 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:56.247344 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:56.387431 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:56.580411 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:56.742812 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:56.745754 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:56.887178 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:57.078023 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:57.243212 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:57.244502 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:57.376179 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:57.386721 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:57.576694 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:57.742776 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:57.744843 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:57.886977 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:58.078118 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:58.245998 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:58.246682 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:58.386621 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:58.579787 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:58.745252 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:58.746683 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:58.887342 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:59.078292 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:59.244286 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:59.247592 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:59.378413 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:33:59.387145 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:33:59.578800 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:33:59.745726 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:33:59.748974 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:33:59.886682 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:00.103141 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:00.360787 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:00.363278 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:00.407092 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:00.578874 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:00.747020 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:00.748324 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:00.887554 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:01.081192 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:01.249354 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:01.250275 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:01.379636 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:01.389170 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:01.577439 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:01.743226 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:01.744329 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:01.886815 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:02.077273 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:02.243186 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:02.244753 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:02.386406 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:02.577224 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:02.743960 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:02.744663 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:02.887401 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:03.078809 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:03.252578 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:03.253308 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:03.387405 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:03.577870 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:03.745659 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:03.747052 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:03.880391 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:03.888620 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:04.077598 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:04.265932 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:04.275088 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:04.391130 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:04.577893 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:04.747491 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:04.749681 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:04.888781 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:05.079549 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:05.245548 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:05.250589 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:05.390843 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:05.578271 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:05.746769 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:05.748129 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:05.887525 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:06.077692 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:06.244508 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:06.246244 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:06.377529 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:06.387112 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:06.578014 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:06.744378 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:06.745231 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:06.886348 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:07.087949 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:07.244094 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:07.249364 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:07.386605 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:07.577593 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:07.743959 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:07.753452 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:07.887784 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:08.079826 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:08.246408 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:08.249241 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:08.382278 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:08.391012 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:08.582697 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:08.748872 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:08.750227 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:08.887277 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:09.078473 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:09.259490 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:09.259856 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:09.386845 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:09.579861 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:09.753611 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:09.755559 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:09.892274 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:10.078945 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:10.246135 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:10.250377 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:10.387078 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:10.582028 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:10.759973 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:10.764765 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:10.877904 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:10.887396 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:11.078420 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:11.245171 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:11.250738 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:11.386725 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:11.579585 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:11.744153 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:11.744694 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 22:34:11.887788 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:12.078732 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:12.243939 1585635 kapi.go:107] duration metric: took 1m31.50569722s to wait for kubernetes.io/minikube-addons=registry ...
	I0731 22:34:12.244850 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:12.386822 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:12.576558 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:12.743462 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:12.878041 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:12.890117 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:13.078945 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:13.244745 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:13.389814 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:13.577580 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:13.745491 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:13.887135 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:14.077444 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:14.244445 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:14.387690 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:14.578445 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:14.744503 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:14.894823 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:14.902050 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:15.078810 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:15.249215 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:15.408789 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:15.579618 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:15.744159 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:15.887958 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:16.077378 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:16.244254 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:16.387034 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:16.578252 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:16.746126 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:16.887598 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:17.094901 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:17.244349 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:17.379366 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:17.393075 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:17.577646 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:17.744812 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:17.891206 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:18.078720 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:18.244191 1585635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 22:34:18.388797 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:18.577646 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:18.744548 1585635 kapi.go:107] duration metric: took 1m38.005338482s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0731 22:34:18.887930 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:19.078019 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:19.382525 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:19.388757 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:19.577915 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:19.886686 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:20.084112 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:20.401373 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:20.581997 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:20.886944 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:21.077623 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:21.393946 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:21.576996 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:21.876730 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:21.886784 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:22.077355 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:22.387496 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:22.586340 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:22.887451 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:23.079870 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:23.390044 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 22:34:23.576931 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:23.877143 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:23.886639 1585635 kapi.go:107] duration metric: took 1m39.003466373s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0731 22:34:23.889356 1585635 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-849486 cluster.
	I0731 22:34:23.891944 1585635 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0731 22:34:23.894273 1585635 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
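For reference, the opt-out described in the gcp-auth messages above is set at pod creation time. A minimal sketch (the pod name `demo`, the image, and the label value `true` are illustrative assumptions; only the `gcp-auth-skip-secret` key comes from the log):

    kubectl --context addons-849486 run demo --image=nginx --labels=gcp-auth-skip-secret=true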
	I0731 22:34:24.078145 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:24.579009 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:25.077843 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:25.578009 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:26.076640 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:26.376624 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:26.577495 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:27.077960 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:27.577263 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:28.076730 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:28.376693 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:28.577671 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:29.077480 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:29.577326 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:30.077643 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:30.379062 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:30.576665 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:31.080045 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:31.577524 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:32.078310 1585635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 22:34:32.577531 1585635 kapi.go:107] duration metric: took 1m51.006159376s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0731 22:34:32.580434 1585635 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0731 22:34:32.583147 1585635 addons.go:510] duration metric: took 1m58.78463683s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns cloud-spanner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
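The kapi.go polling recorded above can be approximated from the CLI with `kubectl wait` against the same label selectors. A sketch under assumptions (the timeout is arbitrary; the selector is taken verbatim from the log lines):

    kubectl --context addons-849486 wait pod --all-namespaces \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver \
      --for=condition=Ready --timeout=10m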
	I0731 22:34:32.877638 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:35.377360 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:37.377692 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:39.877811 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:42.377875 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:44.877299 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:46.877651 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:49.376702 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:51.376786 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:53.377146 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:55.876547 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:57.876671 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:34:59.877192 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:35:01.878087 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:35:04.377293 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:35:06.876971 1585635 pod_ready.go:102] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"False"
	I0731 22:35:08.377686 1585635 pod_ready.go:92] pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:08.377720 1585635 pod_ready.go:81] duration metric: took 1m46.007003338s for pod "metrics-server-c59844bb4-vlxmw" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:08.377732 1585635 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-tjbj7" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:08.383583 1585635 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-tjbj7" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:08.383609 1585635 pod_ready.go:81] duration metric: took 5.867968ms for pod "nvidia-device-plugin-daemonset-tjbj7" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:08.383631 1585635 pod_ready.go:38] duration metric: took 1m47.98733424s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
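The pod_ready checks above track each pod's Ready condition; the same value can be read directly. A sketch (pod name taken from the log; this prints the status the log renders as "Ready":"True"):

    kubectl --context addons-849486 -n kube-system get pod metrics-server-c59844bb4-vlxmw \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'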
	I0731 22:35:08.383648 1585635 api_server.go:52] waiting for apiserver process to appear ...
	I0731 22:35:08.384243 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 22:35:08.384320 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 22:35:08.434388 1585635 cri.go:89] found id: "8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc"
	I0731 22:35:08.434451 1585635 cri.go:89] found id: ""
	I0731 22:35:08.434472 1585635 logs.go:276] 1 containers: [8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc]
	I0731 22:35:08.434561 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:08.438977 1585635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 22:35:08.439093 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 22:35:08.481395 1585635 cri.go:89] found id: "5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699"
	I0731 22:35:08.481420 1585635 cri.go:89] found id: ""
	I0731 22:35:08.481428 1585635 logs.go:276] 1 containers: [5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699]
	I0731 22:35:08.481508 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:08.485164 1585635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 22:35:08.485250 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 22:35:08.529715 1585635 cri.go:89] found id: "033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b"
	I0731 22:35:08.529738 1585635 cri.go:89] found id: ""
	I0731 22:35:08.529761 1585635 logs.go:276] 1 containers: [033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b]
	I0731 22:35:08.529848 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:08.533519 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 22:35:08.533594 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 22:35:08.573691 1585635 cri.go:89] found id: "43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d"
	I0731 22:35:08.573715 1585635 cri.go:89] found id: ""
	I0731 22:35:08.573723 1585635 logs.go:276] 1 containers: [43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d]
	I0731 22:35:08.573811 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:08.577485 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 22:35:08.577611 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 22:35:08.621799 1585635 cri.go:89] found id: "6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc"
	I0731 22:35:08.621823 1585635 cri.go:89] found id: ""
	I0731 22:35:08.621831 1585635 logs.go:276] 1 containers: [6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc]
	I0731 22:35:08.621912 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:08.625761 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 22:35:08.625850 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 22:35:08.666050 1585635 cri.go:89] found id: "f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf"
	I0731 22:35:08.666070 1585635 cri.go:89] found id: ""
	I0731 22:35:08.666079 1585635 logs.go:276] 1 containers: [f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf]
	I0731 22:35:08.666133 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:08.669582 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 22:35:08.669692 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 22:35:08.707843 1585635 cri.go:89] found id: "2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757"
	I0731 22:35:08.707864 1585635 cri.go:89] found id: ""
	I0731 22:35:08.707872 1585635 logs.go:276] 1 containers: [2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757]
	I0731 22:35:08.707936 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:08.711678 1585635 logs.go:123] Gathering logs for kubelet ...
	I0731 22:35:08.711707 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 22:35:08.794103 1585635 logs.go:123] Gathering logs for coredns [033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b] ...
	I0731 22:35:08.794143 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b"
	I0731 22:35:08.848313 1585635 logs.go:123] Gathering logs for etcd [5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699] ...
	I0731 22:35:08.848342 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699"
	I0731 22:35:08.901253 1585635 logs.go:123] Gathering logs for kube-scheduler [43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d] ...
	I0731 22:35:08.901286 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d"
	I0731 22:35:08.951128 1585635 logs.go:123] Gathering logs for kube-proxy [6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc] ...
	I0731 22:35:08.951166 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc"
	I0731 22:35:08.990548 1585635 logs.go:123] Gathering logs for kube-controller-manager [f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf] ...
	I0731 22:35:08.990576 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf"
	I0731 22:35:09.064518 1585635 logs.go:123] Gathering logs for kindnet [2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757] ...
	I0731 22:35:09.064561 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757"
	I0731 22:35:09.110247 1585635 logs.go:123] Gathering logs for dmesg ...
	I0731 22:35:09.110285 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 22:35:09.134555 1585635 logs.go:123] Gathering logs for describe nodes ...
	I0731 22:35:09.134631 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 22:35:09.325410 1585635 logs.go:123] Gathering logs for kube-apiserver [8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc] ...
	I0731 22:35:09.325503 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc"
	I0731 22:35:09.379782 1585635 logs.go:123] Gathering logs for CRI-O ...
	I0731 22:35:09.379819 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 22:35:09.474280 1585635 logs.go:123] Gathering logs for container status ...
	I0731 22:35:09.474317 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
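The gathering pass above reduces to three command families, all runnable inside the node (for example over `minikube ssh`). Consolidated from the Run: lines, with `<container-id>` as a placeholder:

    sudo crictl ps -a --quiet --name=kube-apiserver   # resolve a container ID by name
    sudo crictl logs --tail 400 <container-id>        # logs for one CRI container
    sudo journalctl -u kubelet -n 400                 # unit logs (kubelet; crio likewise)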
	I0731 22:35:12.026789 1585635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:35:12.042395 1585635 api_server.go:72] duration metric: took 2m38.244254871s to wait for apiserver process to appear ...
	I0731 22:35:12.042421 1585635 api_server.go:88] waiting for apiserver healthz status ...
	I0731 22:35:12.042461 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 22:35:12.042524 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 22:35:12.095037 1585635 cri.go:89] found id: "8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc"
	I0731 22:35:12.095057 1585635 cri.go:89] found id: ""
	I0731 22:35:12.095065 1585635 logs.go:276] 1 containers: [8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc]
	I0731 22:35:12.095155 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:12.099480 1585635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 22:35:12.099560 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 22:35:12.143149 1585635 cri.go:89] found id: "5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699"
	I0731 22:35:12.143170 1585635 cri.go:89] found id: ""
	I0731 22:35:12.143178 1585635 logs.go:276] 1 containers: [5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699]
	I0731 22:35:12.143245 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:12.146914 1585635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 22:35:12.146984 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 22:35:12.207387 1585635 cri.go:89] found id: "033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b"
	I0731 22:35:12.207409 1585635 cri.go:89] found id: ""
	I0731 22:35:12.207416 1585635 logs.go:276] 1 containers: [033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b]
	I0731 22:35:12.207472 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:12.210978 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 22:35:12.211052 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 22:35:12.249122 1585635 cri.go:89] found id: "43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d"
	I0731 22:35:12.249185 1585635 cri.go:89] found id: ""
	I0731 22:35:12.249215 1585635 logs.go:276] 1 containers: [43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d]
	I0731 22:35:12.249302 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:12.253219 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 22:35:12.253312 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 22:35:12.291887 1585635 cri.go:89] found id: "6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc"
	I0731 22:35:12.291909 1585635 cri.go:89] found id: ""
	I0731 22:35:12.291917 1585635 logs.go:276] 1 containers: [6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc]
	I0731 22:35:12.291979 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:12.295522 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 22:35:12.295605 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 22:35:12.355490 1585635 cri.go:89] found id: "f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf"
	I0731 22:35:12.355552 1585635 cri.go:89] found id: ""
	I0731 22:35:12.355575 1585635 logs.go:276] 1 containers: [f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf]
	I0731 22:35:12.355643 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:12.359565 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 22:35:12.359644 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 22:35:12.401208 1585635 cri.go:89] found id: "2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757"
	I0731 22:35:12.401293 1585635 cri.go:89] found id: ""
	I0731 22:35:12.401322 1585635 logs.go:276] 1 containers: [2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757]
	I0731 22:35:12.401390 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:12.404965 1585635 logs.go:123] Gathering logs for CRI-O ...
	I0731 22:35:12.404987 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 22:35:12.501081 1585635 logs.go:123] Gathering logs for dmesg ...
	I0731 22:35:12.501124 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 22:35:12.521644 1585635 logs.go:123] Gathering logs for describe nodes ...
	I0731 22:35:12.521687 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 22:35:12.669511 1585635 logs.go:123] Gathering logs for etcd [5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699] ...
	I0731 22:35:12.669544 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699"
	I0731 22:35:12.725707 1585635 logs.go:123] Gathering logs for kube-proxy [6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc] ...
	I0731 22:35:12.725747 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc"
	I0731 22:35:12.775029 1585635 logs.go:123] Gathering logs for kindnet [2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757] ...
	I0731 22:35:12.775057 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757"
	I0731 22:35:12.821262 1585635 logs.go:123] Gathering logs for container status ...
	I0731 22:35:12.821293 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 22:35:12.871603 1585635 logs.go:123] Gathering logs for kubelet ...
	I0731 22:35:12.871636 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 22:35:12.956264 1585635 logs.go:123] Gathering logs for kube-apiserver [8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc] ...
	I0731 22:35:12.956300 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc"
	I0731 22:35:13.028806 1585635 logs.go:123] Gathering logs for coredns [033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b] ...
	I0731 22:35:13.028839 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b"
	I0731 22:35:13.072609 1585635 logs.go:123] Gathering logs for kube-scheduler [43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d] ...
	I0731 22:35:13.072639 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d"
	I0731 22:35:13.126190 1585635 logs.go:123] Gathering logs for kube-controller-manager [f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf] ...
	I0731 22:35:13.126221 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf"
	I0731 22:35:15.721304 1585635 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0731 22:35:15.730591 1585635 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0731 22:35:15.731562 1585635 api_server.go:141] control plane version: v1.30.3
	I0731 22:35:15.731585 1585635 api_server.go:131] duration metric: took 3.689156807s to wait for apiserver health ...
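The same health probe can be reproduced through the API server's /healthz endpoint; a sketch, matching the 200/ok response recorded above:

    kubectl --context addons-849486 get --raw /healthz    # expected output: ok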
	I0731 22:35:15.731594 1585635 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 22:35:15.731615 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 22:35:15.731677 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 22:35:15.775642 1585635 cri.go:89] found id: "8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc"
	I0731 22:35:15.775664 1585635 cri.go:89] found id: ""
	I0731 22:35:15.775673 1585635 logs.go:276] 1 containers: [8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc]
	I0731 22:35:15.775731 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:15.779297 1585635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 22:35:15.779370 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 22:35:15.821650 1585635 cri.go:89] found id: "5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699"
	I0731 22:35:15.821672 1585635 cri.go:89] found id: ""
	I0731 22:35:15.821680 1585635 logs.go:276] 1 containers: [5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699]
	I0731 22:35:15.821735 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:15.825240 1585635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 22:35:15.825324 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 22:35:15.862900 1585635 cri.go:89] found id: "033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b"
	I0731 22:35:15.862922 1585635 cri.go:89] found id: ""
	I0731 22:35:15.862930 1585635 logs.go:276] 1 containers: [033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b]
	I0731 22:35:15.862989 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:15.866695 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 22:35:15.866771 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 22:35:15.903992 1585635 cri.go:89] found id: "43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d"
	I0731 22:35:15.904027 1585635 cri.go:89] found id: ""
	I0731 22:35:15.904035 1585635 logs.go:276] 1 containers: [43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d]
	I0731 22:35:15.904126 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:15.907764 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 22:35:15.907861 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 22:35:15.947029 1585635 cri.go:89] found id: "6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc"
	I0731 22:35:15.947059 1585635 cri.go:89] found id: ""
	I0731 22:35:15.947070 1585635 logs.go:276] 1 containers: [6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc]
	I0731 22:35:15.947145 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:15.950896 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 22:35:15.950990 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 22:35:15.990729 1585635 cri.go:89] found id: "f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf"
	I0731 22:35:15.990753 1585635 cri.go:89] found id: ""
	I0731 22:35:15.990762 1585635 logs.go:276] 1 containers: [f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf]
	I0731 22:35:15.990821 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:15.994961 1585635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 22:35:15.995033 1585635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 22:35:16.040165 1585635 cri.go:89] found id: "2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757"
	I0731 22:35:16.040188 1585635 cri.go:89] found id: ""
	I0731 22:35:16.040195 1585635 logs.go:276] 1 containers: [2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757]
	I0731 22:35:16.040255 1585635 ssh_runner.go:195] Run: which crictl
	I0731 22:35:16.043890 1585635 logs.go:123] Gathering logs for kube-apiserver [8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc] ...
	I0731 22:35:16.043916 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc"
	I0731 22:35:16.098081 1585635 logs.go:123] Gathering logs for etcd [5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699] ...
	I0731 22:35:16.098115 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699"
	I0731 22:35:16.143838 1585635 logs.go:123] Gathering logs for coredns [033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b] ...
	I0731 22:35:16.143871 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b"
	I0731 22:35:16.187912 1585635 logs.go:123] Gathering logs for kube-controller-manager [f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf] ...
	I0731 22:35:16.187942 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf"
	I0731 22:35:16.276841 1585635 logs.go:123] Gathering logs for kindnet [2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757] ...
	I0731 22:35:16.276878 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757"
	I0731 22:35:16.333324 1585635 logs.go:123] Gathering logs for CRI-O ...
	I0731 22:35:16.333353 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 22:35:16.430440 1585635 logs.go:123] Gathering logs for kubelet ...
	I0731 22:35:16.430477 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 22:35:16.516088 1585635 logs.go:123] Gathering logs for dmesg ...
	I0731 22:35:16.516129 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 22:35:16.535819 1585635 logs.go:123] Gathering logs for describe nodes ...
	I0731 22:35:16.535847 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 22:35:16.670971 1585635 logs.go:123] Gathering logs for kube-scheduler [43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d] ...
	I0731 22:35:16.671005 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d"
	I0731 22:35:16.724764 1585635 logs.go:123] Gathering logs for kube-proxy [6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc] ...
	I0731 22:35:16.724795 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc"
	I0731 22:35:16.764416 1585635 logs.go:123] Gathering logs for container status ...
	I0731 22:35:16.764445 1585635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 22:35:19.328261 1585635 system_pods.go:59] 18 kube-system pods found
	I0731 22:35:19.328302 1585635 system_pods.go:61] "coredns-7db6d8ff4d-qv2pm" [d5cb6a71-36b0-4416-9aae-f244288db1a0] Running
	I0731 22:35:19.328308 1585635 system_pods.go:61] "csi-hostpath-attacher-0" [9777d915-8d6e-4ea4-ad27-bf9a834bd6c8] Running
	I0731 22:35:19.328313 1585635 system_pods.go:61] "csi-hostpath-resizer-0" [0871abf1-f861-4fcb-9565-833ce33eb600] Running
	I0731 22:35:19.328317 1585635 system_pods.go:61] "csi-hostpathplugin-54fjr" [41d14787-618a-4f95-99a0-30ed9a484afe] Running
	I0731 22:35:19.328321 1585635 system_pods.go:61] "etcd-addons-849486" [2f51d3a4-3ccd-4c0c-b36e-039e5b90b582] Running
	I0731 22:35:19.328325 1585635 system_pods.go:61] "kindnet-v5dmr" [36e1674f-d093-4894-a330-acf34f2d862a] Running
	I0731 22:35:19.328330 1585635 system_pods.go:61] "kube-apiserver-addons-849486" [9861d6dd-34df-48f6-989e-a5ec25987ba8] Running
	I0731 22:35:19.328365 1585635 system_pods.go:61] "kube-controller-manager-addons-849486" [86b5d477-353e-4384-850e-2fd9de19a5d5] Running
	I0731 22:35:19.328375 1585635 system_pods.go:61] "kube-ingress-dns-minikube" [93c14fc1-ba7a-4b8f-b1bb-b1a447536081] Running
	I0731 22:35:19.328379 1585635 system_pods.go:61] "kube-proxy-mxw62" [2c575b64-75f4-4f37-98e9-b2cb3f720f73] Running
	I0731 22:35:19.328383 1585635 system_pods.go:61] "kube-scheduler-addons-849486" [7407d088-a557-4b87-a488-37be78f806fd] Running
	I0731 22:35:19.328386 1585635 system_pods.go:61] "metrics-server-c59844bb4-vlxmw" [3c4a50ec-9a60-43e3-9e0c-a91793afab2d] Running
	I0731 22:35:19.328390 1585635 system_pods.go:61] "nvidia-device-plugin-daemonset-tjbj7" [04df05f4-a9ce-4b7d-a544-2ed8988a7f7d] Running
	I0731 22:35:19.328400 1585635 system_pods.go:61] "registry-698f998955-xsv4s" [505680ce-0882-4b35-957c-5038c3ef415e] Running
	I0731 22:35:19.328404 1585635 system_pods.go:61] "registry-proxy-7fzhl" [83d11338-5592-463e-b649-7ab9c5714f7d] Running
	I0731 22:35:19.328407 1585635 system_pods.go:61] "snapshot-controller-745499f584-8d8zk" [9b01ff52-2269-4019-8706-7629a243c597] Running
	I0731 22:35:19.328411 1585635 system_pods.go:61] "snapshot-controller-745499f584-bcqgs" [f7a07cca-0c8c-45e0-9347-78e13677f814] Running
	I0731 22:35:19.328420 1585635 system_pods.go:61] "storage-provisioner" [fef6ed9b-0181-4dcb-b189-cdc918ea4104] Running
	I0731 22:35:19.328442 1585635 system_pods.go:74] duration metric: took 3.59682632s to wait for pod list to return data ...
	I0731 22:35:19.328458 1585635 default_sa.go:34] waiting for default service account to be created ...
	I0731 22:35:19.330830 1585635 default_sa.go:45] found service account: "default"
	I0731 22:35:19.330855 1585635 default_sa.go:55] duration metric: took 2.389752ms for default service account to be created ...
	I0731 22:35:19.330864 1585635 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 22:35:19.340704 1585635 system_pods.go:86] 18 kube-system pods found
	I0731 22:35:19.340741 1585635 system_pods.go:89] "coredns-7db6d8ff4d-qv2pm" [d5cb6a71-36b0-4416-9aae-f244288db1a0] Running
	I0731 22:35:19.340749 1585635 system_pods.go:89] "csi-hostpath-attacher-0" [9777d915-8d6e-4ea4-ad27-bf9a834bd6c8] Running
	I0731 22:35:19.340754 1585635 system_pods.go:89] "csi-hostpath-resizer-0" [0871abf1-f861-4fcb-9565-833ce33eb600] Running
	I0731 22:35:19.340759 1585635 system_pods.go:89] "csi-hostpathplugin-54fjr" [41d14787-618a-4f95-99a0-30ed9a484afe] Running
	I0731 22:35:19.340763 1585635 system_pods.go:89] "etcd-addons-849486" [2f51d3a4-3ccd-4c0c-b36e-039e5b90b582] Running
	I0731 22:35:19.340767 1585635 system_pods.go:89] "kindnet-v5dmr" [36e1674f-d093-4894-a330-acf34f2d862a] Running
	I0731 22:35:19.340771 1585635 system_pods.go:89] "kube-apiserver-addons-849486" [9861d6dd-34df-48f6-989e-a5ec25987ba8] Running
	I0731 22:35:19.340776 1585635 system_pods.go:89] "kube-controller-manager-addons-849486" [86b5d477-353e-4384-850e-2fd9de19a5d5] Running
	I0731 22:35:19.340780 1585635 system_pods.go:89] "kube-ingress-dns-minikube" [93c14fc1-ba7a-4b8f-b1bb-b1a447536081] Running
	I0731 22:35:19.340791 1585635 system_pods.go:89] "kube-proxy-mxw62" [2c575b64-75f4-4f37-98e9-b2cb3f720f73] Running
	I0731 22:35:19.340795 1585635 system_pods.go:89] "kube-scheduler-addons-849486" [7407d088-a557-4b87-a488-37be78f806fd] Running
	I0731 22:35:19.340806 1585635 system_pods.go:89] "metrics-server-c59844bb4-vlxmw" [3c4a50ec-9a60-43e3-9e0c-a91793afab2d] Running
	I0731 22:35:19.340810 1585635 system_pods.go:89] "nvidia-device-plugin-daemonset-tjbj7" [04df05f4-a9ce-4b7d-a544-2ed8988a7f7d] Running
	I0731 22:35:19.340816 1585635 system_pods.go:89] "registry-698f998955-xsv4s" [505680ce-0882-4b35-957c-5038c3ef415e] Running
	I0731 22:35:19.340823 1585635 system_pods.go:89] "registry-proxy-7fzhl" [83d11338-5592-463e-b649-7ab9c5714f7d] Running
	I0731 22:35:19.340827 1585635 system_pods.go:89] "snapshot-controller-745499f584-8d8zk" [9b01ff52-2269-4019-8706-7629a243c597] Running
	I0731 22:35:19.340831 1585635 system_pods.go:89] "snapshot-controller-745499f584-bcqgs" [f7a07cca-0c8c-45e0-9347-78e13677f814] Running
	I0731 22:35:19.340837 1585635 system_pods.go:89] "storage-provisioner" [fef6ed9b-0181-4dcb-b189-cdc918ea4104] Running
	I0731 22:35:19.340844 1585635 system_pods.go:126] duration metric: took 9.975023ms to wait for k8s-apps to be running ...
	I0731 22:35:19.340856 1585635 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 22:35:19.340929 1585635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:35:19.355574 1585635 system_svc.go:56] duration metric: took 14.70843ms WaitForService to wait for kubelet
	I0731 22:35:19.355601 1585635 kubeadm.go:582] duration metric: took 2m45.55746659s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 22:35:19.355623 1585635 node_conditions.go:102] verifying NodePressure condition ...
	I0731 22:35:19.358905 1585635 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0731 22:35:19.358940 1585635 node_conditions.go:123] node cpu capacity is 2
	I0731 22:35:19.358952 1585635 node_conditions.go:105] duration metric: took 3.32373ms to run NodePressure ...
	I0731 22:35:19.358985 1585635 start.go:241] waiting for startup goroutines ...
	I0731 22:35:19.359000 1585635 start.go:246] waiting for cluster config update ...
	I0731 22:35:19.359016 1585635 start.go:255] writing updated cluster config ...
	I0731 22:35:19.359331 1585635 ssh_runner.go:195] Run: rm -f paused
	I0731 22:35:19.710314 1585635 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 22:35:19.712499 1585635 out.go:177] * Done! kubectl is now configured to use "addons-849486" cluster and "default" namespace by default
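	
	The log-gathering loop above reduces to two crictl invocations per component: "crictl ps -a --quiet --name=<component>" to list container IDs, then "crictl logs --tail 400 <id>" for each ID. A minimal Go sketch of that pattern (an illustration only, not minikube's logs.go; assumes crictl on PATH and passwordless sudo, as in the ssh_runner commands above):
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
		for _, name := range components {
			// One container ID per line, exactly like the cri.go "found id:" entries.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("listing %s containers: %v\n", name, err)
				continue
			}
			for _, id := range strings.Fields(string(out)) {
				// Mirrors the "crictl logs --tail 400 <id>" calls above.
				logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				if err != nil {
					fmt.Printf("logs for %s (%s): %v\n", name, id, err)
					continue
				}
				fmt.Printf("==> %s [%s] <==\n%s\n", name, id, logs)
			}
		}
	}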
	
	
	==> CRI-O <==
	Jul 31 22:40:21 addons-849486 crio[970]: time="2024-07-31 22:40:21.400794619Z" level=info msg="Removing container: 26627e03dfcb81309c04f617e55aeabd286f089140df375b3397f41cbfaf7749" id=2c932a8f-4f88-4168-b8a5-8fccb9644645 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 31 22:40:21 addons-849486 crio[970]: time="2024-07-31 22:40:21.417725523Z" level=info msg="Removed container 26627e03dfcb81309c04f617e55aeabd286f089140df375b3397f41cbfaf7749: ingress-nginx/ingress-nginx-admission-patch-59tg2/patch" id=2c932a8f-4f88-4168-b8a5-8fccb9644645 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 31 22:40:21 addons-849486 crio[970]: time="2024-07-31 22:40:21.419059180Z" level=info msg="Removing container: cea6d2938f44bd10f65639ae3e6035533be0494d3fffeab7656d442922362c31" id=181c1392-3b06-4f87-be2b-eece4940fddf name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 31 22:40:21 addons-849486 crio[970]: time="2024-07-31 22:40:21.436432224Z" level=info msg="Removed container cea6d2938f44bd10f65639ae3e6035533be0494d3fffeab7656d442922362c31: ingress-nginx/ingress-nginx-admission-create-52v9j/create" id=181c1392-3b06-4f87-be2b-eece4940fddf name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 31 22:40:21 addons-849486 crio[970]: time="2024-07-31 22:40:21.437812592Z" level=info msg="Stopping pod sandbox: ee25314da98e1cb997a33dac2f2f89b6db58cc7fd4f17068d8d7a74a857deb9a" id=8437a952-b26b-4d9d-8b6f-412739078b8e name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 31 22:40:21 addons-849486 crio[970]: time="2024-07-31 22:40:21.437850664Z" level=info msg="Stopped pod sandbox (already stopped): ee25314da98e1cb997a33dac2f2f89b6db58cc7fd4f17068d8d7a74a857deb9a" id=8437a952-b26b-4d9d-8b6f-412739078b8e name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 31 22:40:21 addons-849486 crio[970]: time="2024-07-31 22:40:21.438218236Z" level=info msg="Removing pod sandbox: ee25314da98e1cb997a33dac2f2f89b6db58cc7fd4f17068d8d7a74a857deb9a" id=c6b5cd1b-b8ad-497f-84f5-35e6a3ff98dd name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 31 22:40:21 addons-849486 crio[970]: time="2024-07-31 22:40:21.446452628Z" level=info msg="Removed pod sandbox: ee25314da98e1cb997a33dac2f2f89b6db58cc7fd4f17068d8d7a74a857deb9a" id=c6b5cd1b-b8ad-497f-84f5-35e6a3ff98dd name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 31 22:40:21 addons-849486 crio[970]: time="2024-07-31 22:40:21.446901956Z" level=info msg="Stopping pod sandbox: e4e6f8867afe4a7b8f5c9c1531286e2baa2e77a4a5a6d685b12fa7028248547f" id=13fc581f-b587-4ffa-81de-09af73877e33 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 31 22:40:21 addons-849486 crio[970]: time="2024-07-31 22:40:21.447007432Z" level=info msg="Stopped pod sandbox (already stopped): e4e6f8867afe4a7b8f5c9c1531286e2baa2e77a4a5a6d685b12fa7028248547f" id=13fc581f-b587-4ffa-81de-09af73877e33 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 31 22:40:21 addons-849486 crio[970]: time="2024-07-31 22:40:21.447364280Z" level=info msg="Removing pod sandbox: e4e6f8867afe4a7b8f5c9c1531286e2baa2e77a4a5a6d685b12fa7028248547f" id=559e39cc-ecd7-4aad-9d26-9aa22fbceda8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 31 22:40:21 addons-849486 crio[970]: time="2024-07-31 22:40:21.455608929Z" level=info msg="Removed pod sandbox: e4e6f8867afe4a7b8f5c9c1531286e2baa2e77a4a5a6d685b12fa7028248547f" id=559e39cc-ecd7-4aad-9d26-9aa22fbceda8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 31 22:40:21 addons-849486 crio[970]: time="2024-07-31 22:40:21.456072435Z" level=info msg="Stopping pod sandbox: ddbc3d0d76e2da1f1c41737246db7334251f0b4b592638251dffacffa5fd0634" id=75c0b90b-a73e-4b08-bfdc-e3aa9231f96e name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 31 22:40:21 addons-849486 crio[970]: time="2024-07-31 22:40:21.456106815Z" level=info msg="Stopped pod sandbox (already stopped): ddbc3d0d76e2da1f1c41737246db7334251f0b4b592638251dffacffa5fd0634" id=75c0b90b-a73e-4b08-bfdc-e3aa9231f96e name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 31 22:40:21 addons-849486 crio[970]: time="2024-07-31 22:40:21.456381768Z" level=info msg="Removing pod sandbox: ddbc3d0d76e2da1f1c41737246db7334251f0b4b592638251dffacffa5fd0634" id=3a88f519-91d7-4d6c-9da0-6a718d707d4a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 31 22:40:21 addons-849486 crio[970]: time="2024-07-31 22:40:21.464625251Z" level=info msg="Removed pod sandbox: ddbc3d0d76e2da1f1c41737246db7334251f0b4b592638251dffacffa5fd0634" id=3a88f519-91d7-4d6c-9da0-6a718d707d4a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 31 22:40:21 addons-849486 crio[970]: time="2024-07-31 22:40:21.465078944Z" level=info msg="Stopping pod sandbox: 4b1cb46cac816513ae64c5f04a32c2b6aa688e8cf81455aa4554699fa843c4a9" id=2d248984-bb41-4091-8585-d7fe446dc6b8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 31 22:40:21 addons-849486 crio[970]: time="2024-07-31 22:40:21.465360690Z" level=info msg="Stopped pod sandbox (already stopped): 4b1cb46cac816513ae64c5f04a32c2b6aa688e8cf81455aa4554699fa843c4a9" id=2d248984-bb41-4091-8585-d7fe446dc6b8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 31 22:40:21 addons-849486 crio[970]: time="2024-07-31 22:40:21.465668570Z" level=info msg="Removing pod sandbox: 4b1cb46cac816513ae64c5f04a32c2b6aa688e8cf81455aa4554699fa843c4a9" id=986286e9-d122-48d7-9f4c-66b5b3be928f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 31 22:40:21 addons-849486 crio[970]: time="2024-07-31 22:40:21.473941002Z" level=info msg="Removed pod sandbox: 4b1cb46cac816513ae64c5f04a32c2b6aa688e8cf81455aa4554699fa843c4a9" id=986286e9-d122-48d7-9f4c-66b5b3be928f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 31 22:42:54 addons-849486 crio[970]: time="2024-07-31 22:42:54.591268536Z" level=info msg="Stopping container: fb5857a681582508d5430d00266dbefe80c8212f52bade8be5e709a9dada08ec (timeout: 30s)" id=4c24e243-a768-41cf-9c9b-1697a7fcb923 name=/runtime.v1.RuntimeService/StopContainer
	Jul 31 22:42:55 addons-849486 crio[970]: time="2024-07-31 22:42:55.766546918Z" level=info msg="Stopped container fb5857a681582508d5430d00266dbefe80c8212f52bade8be5e709a9dada08ec: kube-system/metrics-server-c59844bb4-vlxmw/metrics-server" id=4c24e243-a768-41cf-9c9b-1697a7fcb923 name=/runtime.v1.RuntimeService/StopContainer
	Jul 31 22:42:55 addons-849486 crio[970]: time="2024-07-31 22:42:55.767803176Z" level=info msg="Stopping pod sandbox: aebc18b063c4943c49eeb29843d8867b42ae5ac9abb0175ba92b00eb713d7ff1" id=6b86f7ae-cc21-42e6-94d6-53e0a852d877 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 31 22:42:55 addons-849486 crio[970]: time="2024-07-31 22:42:55.768047245Z" level=info msg="Got pod network &{Name:metrics-server-c59844bb4-vlxmw Namespace:kube-system ID:aebc18b063c4943c49eeb29843d8867b42ae5ac9abb0175ba92b00eb713d7ff1 UID:3c4a50ec-9a60-43e3-9e0c-a91793afab2d NetNS:/var/run/netns/f6f1a7db-61b0-4690-949c-aee01fadfe4b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 31 22:42:55 addons-849486 crio[970]: time="2024-07-31 22:42:55.768200549Z" level=info msg="Deleting pod kube-system_metrics-server-c59844bb4-vlxmw from CNI network \"kindnet\" (type=ptp)"
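	
	Each stop/remove pair above is a CRI RuntimeService round trip over CRI-O's gRPC socket (the unix:///var/run/crio/crio.sock path recorded in the node's cri-socket annotation below). A hedged client-side sketch using the published k8s.io/cri-api bindings, with the sandbox ID taken from the log above:
	
	package main
	
	import (
		"context"
		"log"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
	
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		ctx := context.Background()
		id := "ee25314da98e1cb997a33dac2f2f89b6db58cc7fd4f17068d8d7a74a857deb9a"
	
		// Stopping is idempotent: CRI-O answers "already stopped" as in the log.
		if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
			log.Fatal(err)
		}
		if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil {
			log.Fatal(err)
		}
	}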
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d73abe518e12c       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   7703dfdb3d8b8       hello-world-app-6778b5fc9f-fgj4r
	86fd2c8828f5f       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         5 minutes ago       Running             nginx                     0                   d205817c76515       nginx
	aa4fa202f80fe       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     7 minutes ago       Running             busybox                   0                   d3199cba3f129       busybox
	fb5857a681582       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70   8 minutes ago       Exited              metrics-server            0                   aebc18b063c49       metrics-server-c59844bb4-vlxmw
	51ebc2ba4de88       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        9 minutes ago       Running             storage-provisioner       0                   f79ff61a5deeb       storage-provisioner
	033ddb4c73fa6       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                        9 minutes ago       Running             coredns                   0                   a2d158546536e       coredns-7db6d8ff4d-qv2pm
	2fafcbc5f6d0b       docker.io/kindest/kindnetd@sha256:14100a3a7aca6cad3de3f26ee342ad937ca7d2844b1983d3baa7bf5f491baa7a                      10 minutes ago      Running             kindnet-cni               0                   bca3c90519a6e       kindnet-v5dmr
	6cc491c729a8c       2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be                                                        10 minutes ago      Running             kube-proxy                0                   242914e32703a       kube-proxy-mxw62
	43e08f3fcd840       d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355                                                        10 minutes ago      Running             kube-scheduler            0                   ea8f758c53cee       kube-scheduler-addons-849486
	5fd4a5605ac9a       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                        10 minutes ago      Running             etcd                      0                   cdf48c9282148       etcd-addons-849486
	f4494142a4f5f       8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a                                                        10 minutes ago      Running             kube-controller-manager   0                   c26a88794ecfb       kube-controller-manager-addons-849486
	8c713658baa17       61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca                                                        10 minutes ago      Running             kube-apiserver            0                   4bd849428ece1       kube-apiserver-addons-849486
	
	
	==> coredns [033ddb4c73fa66a426aa1758931479807c7b3e019156b72831cbde87435a7a7b] <==
	[INFO] 10.244.0.17:53319 - 749 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002483076s
	[INFO] 10.244.0.17:44981 - 50249 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00064643s
	[INFO] 10.244.0.17:44981 - 85 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000843271s
	[INFO] 10.244.0.17:39094 - 5482 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000149776s
	[INFO] 10.244.0.17:39094 - 27246 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000191663s
	[INFO] 10.244.0.17:60801 - 58112 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000051536s
	[INFO] 10.244.0.17:60801 - 14 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000071048s
	[INFO] 10.244.0.17:53413 - 20028 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000084036s
	[INFO] 10.244.0.17:53413 - 38462 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000120189s
	[INFO] 10.244.0.17:51877 - 45460 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001520635s
	[INFO] 10.244.0.17:51877 - 28313 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001797146s
	[INFO] 10.244.0.17:50534 - 20698 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000076545s
	[INFO] 10.244.0.17:50534 - 4568 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000062105s
	[INFO] 10.244.0.20:50649 - 4546 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000188299s
	[INFO] 10.244.0.20:50431 - 2796 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000070802s
	[INFO] 10.244.0.20:43809 - 37926 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127269s
	[INFO] 10.244.0.20:58929 - 21717 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000081059s
	[INFO] 10.244.0.20:58608 - 48966 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00015652s
	[INFO] 10.244.0.20:45714 - 38716 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000219462s
	[INFO] 10.244.0.20:57979 - 21099 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003101594s
	[INFO] 10.244.0.20:44653 - 53362 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003305515s
	[INFO] 10.244.0.20:44185 - 36571 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000921703s
	[INFO] 10.244.0.20:51034 - 62993 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.00088661s
	[INFO] 10.244.0.22:35898 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000197521s
	[INFO] 10.244.0.22:38799 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110457s
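	
	The NXDOMAIN/NOERROR bursts above are plain resolv.conf search-path expansion: with the kubelet-generated ndots:5, "registry.kube-system.svc.cluster.local" has only four dots, so each search suffix is tried (and answered NXDOMAIN) before the name itself returns NOERROR. An illustrative sketch of that expansion order (assumed stub-resolver behavior, not coredns code):
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// candidates lists the names a stub resolver tries for one query: below
	// the ndots threshold, every search suffix first, the bare name last.
	func candidates(name string, search []string, ndots int) []string {
		if strings.HasSuffix(name, ".") || strings.Count(name, ".") >= ndots {
			return []string{name}
		}
		out := make([]string, 0, len(search)+1)
		for _, s := range search {
			out = append(out, name+"."+s)
		}
		return append(out, name)
	}
	
	func main() {
		search := []string{"kube-system.svc.cluster.local", "svc.cluster.local",
			"cluster.local", "us-east-2.compute.internal"}
		for _, q := range candidates("registry.kube-system.svc.cluster.local", search, 5) {
			fmt.Println(q) // matches the query names logged above
		}
	}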
	
	
	==> describe nodes <==
	Name:               addons-849486
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-849486
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=addons-849486
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T22_32_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-849486
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:32:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-849486
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:42:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 22:39:58 +0000   Wed, 31 Jul 2024 22:32:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 22:39:58 +0000   Wed, 31 Jul 2024 22:32:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 22:39:58 +0000   Wed, 31 Jul 2024 22:32:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 22:39:58 +0000   Wed, 31 Jul 2024 22:33:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-849486
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3a2ae86de824034b2b2f153488a584b
	  System UUID:                ebbfb0f4-45dd-4e41-b862-ce1e4dc4dac6
	  Boot ID:                    2daee006-f42a-4cec-a0b1-7137cc9806d6
	  Kernel Version:             5.15.0-1066-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m36s
	  default                     hello-world-app-6778b5fc9f-fgj4r         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 coredns-7db6d8ff4d-qv2pm                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 etcd-addons-849486                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-v5dmr                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-addons-849486             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-849486    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-mxw62                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-849486             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 10m    kube-proxy       
	  Normal  Starting                 10m    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m    kubelet          Node addons-849486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m    kubelet          Node addons-849486 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m    kubelet          Node addons-849486 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m    node-controller  Node addons-849486 event: Registered Node addons-849486 in Controller
	  Normal  NodeReady                9m37s  kubelet          Node addons-849486 status is now: NodeReady
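	
	The "Allocated resources" block above is just the column sums from the pod table: CPU requests 100+100+100+250+200+100 = 850m against the 2000m capacity, and memory 70Mi+100Mi+50Mi = 220Mi against 8022364Ki. A quick check of the truncated percentages kubectl prints:
	
	package main
	
	import "fmt"
	
	func main() {
		const cpuCapMilli = 2000 // cpu: 2 in Capacity above
		const memCapKi = 8022364 // memory from Capacity above
		cpuReqMilli := 100 + 100 + 100 + 250 + 200 + 100 // coredns, etcd, kindnet, apiserver, controller-manager, scheduler
		memReqKi := (70 + 100 + 50) * 1024               // coredns 70Mi, etcd 100Mi, kindnet 50Mi
		fmt.Printf("cpu     %dm (%d%%)\n", cpuReqMilli, cpuReqMilli*100/cpuCapMilli) // 850m (42%)
		fmt.Printf("memory  %dMi (%d%%)\n", memReqKi/1024, memReqKi*100/memCapKi)    // 220Mi (2%)
	}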
	
	
	==> dmesg <==
	[  +0.001117] FS-Cache: O-key=[8] 'f8405c0100000000'
	[  +0.000704] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000937] FS-Cache: N-cookie d=00000000f00cf1f7{9p.inode} n=000000008e8348f1
	[  +0.001059] FS-Cache: N-key=[8] 'f8405c0100000000'
	[  +0.002954] FS-Cache: Duplicate cookie detected
	[  +0.000668] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.000993] FS-Cache: O-cookie d=00000000f00cf1f7{9p.inode} n=00000000260dfb27
	[  +0.001074] FS-Cache: O-key=[8] 'f8405c0100000000'
	[  +0.000691] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000914] FS-Cache: N-cookie d=00000000f00cf1f7{9p.inode} n=0000000074d3f313
	[  +0.001052] FS-Cache: N-key=[8] 'f8405c0100000000'
	[  +2.888027] FS-Cache: Duplicate cookie detected
	[  +0.004601] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.000988] FS-Cache: O-cookie d=00000000f00cf1f7{9p.inode} n=00000000f2a098bb
	[  +0.001052] FS-Cache: O-key=[8] 'f7405c0100000000'
	[  +0.000703] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000964] FS-Cache: N-cookie d=00000000f00cf1f7{9p.inode} n=000000008e8348f1
	[  +0.001024] FS-Cache: N-key=[8] 'f7405c0100000000'
	[  +0.283755] FS-Cache: Duplicate cookie detected
	[  +0.000701] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.000957] FS-Cache: O-cookie d=00000000f00cf1f7{9p.inode} n=00000000541c833f
	[  +0.001032] FS-Cache: O-key=[8] 'fd405c0100000000'
	[  +0.000717] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000931] FS-Cache: N-cookie d=00000000f00cf1f7{9p.inode} n=000000006dff21d2
	[  +0.001042] FS-Cache: N-key=[8] 'fd405c0100000000'
	
	
	==> etcd [5fd4a5605ac9aa4471d295bc361055fc633a6e9b88026a718179f2403b47f699] <==
	{"level":"info","ts":"2024-07-31T22:32:14.037388Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-31T22:32:14.037426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-07-31T22:32:14.03746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-31T22:32:14.041282Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-849486 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T22:32:14.041492Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T22:32:14.041833Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T22:32:14.045123Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T22:32:14.046722Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T22:32:14.046808Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T22:32:14.046854Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T22:32:14.046818Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T22:32:14.046993Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T22:32:14.047053Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T22:32:14.04858Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-07-31T22:32:35.279674Z","caller":"traceutil/trace.go:171","msg":"trace[1069801401] transaction","detail":"{read_only:false; response_revision:363; number_of_response:1; }","duration":"159.772845ms","start":"2024-07-31T22:32:35.119882Z","end":"2024-07-31T22:32:35.279655Z","steps":["trace[1069801401] 'process raft request'  (duration: 157.074885ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T22:32:37.452807Z","caller":"traceutil/trace.go:171","msg":"trace[102708275] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"163.332948ms","start":"2024-07-31T22:32:37.289459Z","end":"2024-07-31T22:32:37.452792Z","steps":["trace[102708275] 'process raft request'  (duration: 162.95178ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T22:32:37.452989Z","caller":"traceutil/trace.go:171","msg":"trace[1929343642] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"163.471483ms","start":"2024-07-31T22:32:37.28951Z","end":"2024-07-31T22:32:37.452981Z","steps":["trace[1929343642] 'process raft request'  (duration: 162.991443ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T22:32:37.453094Z","caller":"traceutil/trace.go:171","msg":"trace[186713461] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"163.539897ms","start":"2024-07-31T22:32:37.289543Z","end":"2024-07-31T22:32:37.453082Z","steps":["trace[186713461] 'process raft request'  (duration: 162.992133ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T22:32:37.883779Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.92596ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/default/\" range_end:\"/registry/limitranges/default0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T22:32:37.883862Z","caller":"traceutil/trace.go:171","msg":"trace[226505398] range","detail":"{range_begin:/registry/limitranges/default/; range_end:/registry/limitranges/default0; response_count:0; response_revision:389; }","duration":"105.018464ms","start":"2024-07-31T22:32:37.778828Z","end":"2024-07-31T22:32:37.883847Z","steps":["trace[226505398] 'agreement among raft nodes before linearized reading'  (duration: 96.672828ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T22:32:37.883977Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.286468ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T22:32:37.883995Z","caller":"traceutil/trace.go:171","msg":"trace[627011078] range","detail":"{range_begin:/registry/clusterrolebindings/storage-provisioner; range_end:; response_count:0; response_revision:389; }","duration":"105.306923ms","start":"2024-07-31T22:32:37.778683Z","end":"2024-07-31T22:32:37.88399Z","steps":["trace[627011078] 'agreement among raft nodes before linearized reading'  (duration: 96.832894ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T22:42:15.31983Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1966}
	{"level":"info","ts":"2024-07-31T22:42:15.355555Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1966,"took":"35.054181ms","hash":4012463546,"current-db-size-bytes":8138752,"current-db-size":"8.1 MB","current-db-size-in-use-bytes":5124096,"current-db-size-in-use":"5.1 MB"}
	{"level":"info","ts":"2024-07-31T22:42:15.355603Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4012463546,"revision":1966,"compact-revision":-1}
	
	
	==> kernel <==
	 22:42:56 up  6:25,  0 users,  load average: 0.34, 0.92, 1.93
	Linux addons-849486 5.15.0-1066-aws #72~20.04.1-Ubuntu SMP Sat Jul 20 07:44:07 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [2fafcbc5f6d0b18752a2c02561c82f941b2214a21e1668ef3dad41db57404757] <==
	I0731 22:40:49.841322       1 main.go:299] handling current node
	I0731 22:40:59.838399       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0731 22:40:59.838512       1 main.go:299] handling current node
	I0731 22:41:09.842786       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0731 22:41:09.842824       1 main.go:299] handling current node
	I0731 22:41:19.838101       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0731 22:41:19.838141       1 main.go:299] handling current node
	I0731 22:41:29.845458       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0731 22:41:29.845491       1 main.go:299] handling current node
	I0731 22:41:39.838923       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0731 22:41:39.838957       1 main.go:299] handling current node
	I0731 22:41:49.838291       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0731 22:41:49.838328       1 main.go:299] handling current node
	I0731 22:41:59.838330       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0731 22:41:59.838366       1 main.go:299] handling current node
	I0731 22:42:09.847243       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0731 22:42:09.847281       1 main.go:299] handling current node
	I0731 22:42:19.838821       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0731 22:42:19.838852       1 main.go:299] handling current node
	I0731 22:42:29.841774       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0731 22:42:29.841809       1 main.go:299] handling current node
	I0731 22:42:39.838274       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0731 22:42:39.838406       1 main.go:299] handling current node
	I0731 22:42:49.838454       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0731 22:42:49.838565       1 main.go:299] handling current node
	
	
	==> kube-apiserver [8c713658baa17ac1485e11b37c5cf5627bd0c36cae35a0f6877126239017b0dc] <==
	E0731 22:35:08.135060       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.27.160:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.27.160:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.27.160:443: connect: connection refused
	I0731 22:35:08.201422       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0731 22:35:29.525066       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46776: use of closed network connection
	E0731 22:35:29.761361       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46804: use of closed network connection
	I0731 22:36:03.013643       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0731 22:36:38.868885       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 22:36:38.868933       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 22:36:38.894153       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 22:36:38.894208       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 22:36:38.934940       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 22:36:38.935059       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 22:36:38.988118       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 22:36:38.988304       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0731 22:36:39.000527       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0731 22:36:39.010734       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 22:36:39.010836       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0731 22:36:39.935554       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0731 22:36:40.013778       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0731 22:36:40.038117       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0731 22:36:46.619691       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.213.5"}
	I0731 22:37:12.861173       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0731 22:37:13.910284       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0731 22:37:18.425866       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0731 22:37:18.720084       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.224.86"}
	I0731 22:39:38.570384       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.53.179"}
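	
	The v1beta1.metrics.k8s.io failures above are the aggregation layer failing to connect to the metrics-server Service at 10.107.27.160:443, which lines up with the TestAddons/parallel/MetricsServer failure in this report. A hedged way to probe the aggregated API from outside, assuming KUBECONFIG points at this cluster:
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"os"
	
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Equivalent to "kubectl get --raw /apis/metrics.k8s.io/v1beta1": a
		// healthy metrics-server returns the resource list, a broken one
		// surfaces the same dial errors the apiserver logs above.
		body, err := cs.Discovery().RESTClient().Get().
			AbsPath("/apis/metrics.k8s.io/v1beta1").
			DoRaw(context.Background())
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(string(body))
	}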
	
	
	==> kube-controller-manager [f4494142a4f5fd9fc0f28803d884401bab7fdeeed747148419f332d60bcd1cdf] <==
	W0731 22:40:35.678203       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 22:40:35.678236       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 22:40:56.907413       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 22:40:56.907446       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 22:41:05.900510       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 22:41:05.900546       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 22:41:18.777190       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 22:41:18.777230       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 22:41:20.649474       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 22:41:20.649512       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 22:41:47.683753       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 22:41:47.683791       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 22:41:55.301268       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 22:41:55.301396       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 22:42:01.966011       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 22:42:01.966049       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 22:42:08.810489       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 22:42:08.810525       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 22:42:28.292031       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 22:42:28.292066       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 22:42:43.649630       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 22:42:43.649668       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 22:42:48.390412       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 22:42:48.390451       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0731 22:42:54.565722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="6.687µs"
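	
	The recurring PartialObjectMetadata errors above are consistent with metadata informers that were started for the traces.gadget.kinvolk.io CRD, whose watchers the apiserver terminated at 22:37:13 (see its log above): the controller manager keeps relisting a resource that no longer exists. A sketch of the informer pattern behind the k8s.io/client-go/metadata/metadatainformer stack frames, with the GVR assumed from the apiserver log:
	
	package main
	
	import (
		"log"
		"os"
		"time"
	
		"k8s.io/apimachinery/pkg/runtime/schema"
		"k8s.io/client-go/metadata"
		"k8s.io/client-go/metadata/metadatainformer"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			log.Fatal(err)
		}
		mc, err := metadata.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		factory := metadatainformer.NewSharedInformerFactory(mc, 10*time.Minute)
		gvr := schema.GroupVersionResource{Group: "gadget.kinvolk.io", Version: "v1alpha1", Resource: "traces"}
		inf := factory.ForResource(gvr).Informer()
	
		// Once the CRD is gone, each relist fails with "the server could not
		// find the requested resource" until the informer is stopped.
		stop := make(chan struct{})
		go inf.Run(stop)
		time.Sleep(5 * time.Second)
		close(stop)
	}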
	
	
	==> kube-proxy [6cc491c729a8cc7848be882db1e3936e5ef1ea26a9d0e8a2e4265c0c8bb1b5cc] <==
	I0731 22:32:38.756380       1 server_linux.go:69] "Using iptables proxy"
	I0731 22:32:39.222800       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0731 22:32:39.901160       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0731 22:32:39.901205       1 server_linux.go:165] "Using iptables Proxier"
	I0731 22:32:39.929284       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0731 22:32:39.948335       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0731 22:32:39.948710       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 22:32:39.948998       1 server.go:872] "Version info" version="v1.30.3"
	I0731 22:32:39.949261       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 22:32:39.950190       1 config.go:192] "Starting service config controller"
	I0731 22:32:39.950265       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 22:32:39.950335       1 config.go:101] "Starting endpoint slice config controller"
	I0731 22:32:39.950372       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 22:32:39.951159       1 config.go:319] "Starting node config controller"
	I0731 22:32:39.951222       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 22:32:40.057789       1 shared_informer.go:320] Caches are synced for node config
	I0731 22:32:40.057831       1 shared_informer.go:320] Caches are synced for service config
	I0731 22:32:40.057861       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [43e08f3fcd840ce47f21bf61449ffcc9f7334b792f9f65a63c314284e9bb703d] <==
	W0731 22:32:17.620169       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 22:32:17.620187       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 22:32:17.620253       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 22:32:17.620269       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 22:32:17.620891       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 22:32:17.620916       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 22:32:18.498853       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 22:32:18.499001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 22:32:18.536885       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 22:32:18.537027       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 22:32:18.595861       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 22:32:18.595982       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 22:32:18.606358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 22:32:18.606417       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 22:32:18.614429       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 22:32:18.614593       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 22:32:18.619012       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 22:32:18.619142       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 22:32:18.628895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 22:32:18.628993       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 22:32:18.716770       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 22:32:18.716806       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 22:32:18.722342       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 22:32:18.722471       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0731 22:32:19.112195       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 22:39:44 addons-849486 kubelet[1542]: I0731 22:39:44.518245    1542 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97b824ba-9aaa-4a04-839f-fc70bdcb2776-kube-api-access-hst2r" (OuterVolumeSpecName: "kube-api-access-hst2r") pod "97b824ba-9aaa-4a04-839f-fc70bdcb2776" (UID: "97b824ba-9aaa-4a04-839f-fc70bdcb2776"). InnerVolumeSpecName "kube-api-access-hst2r". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 22:39:44 addons-849486 kubelet[1542]: I0731 22:39:44.572050    1542 scope.go:117] "RemoveContainer" containerID="9e133944c30b944cfb2ccbfe4aedc13b6b673800099c562d1013a1d0f2026d4b"
	Jul 31 22:39:44 addons-849486 kubelet[1542]: I0731 22:39:44.587817    1542 scope.go:117] "RemoveContainer" containerID="9e133944c30b944cfb2ccbfe4aedc13b6b673800099c562d1013a1d0f2026d4b"
	Jul 31 22:39:44 addons-849486 kubelet[1542]: E0731 22:39:44.588195    1542 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e133944c30b944cfb2ccbfe4aedc13b6b673800099c562d1013a1d0f2026d4b\": container with ID starting with 9e133944c30b944cfb2ccbfe4aedc13b6b673800099c562d1013a1d0f2026d4b not found: ID does not exist" containerID="9e133944c30b944cfb2ccbfe4aedc13b6b673800099c562d1013a1d0f2026d4b"
	Jul 31 22:39:44 addons-849486 kubelet[1542]: I0731 22:39:44.588233    1542 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e133944c30b944cfb2ccbfe4aedc13b6b673800099c562d1013a1d0f2026d4b"} err="failed to get container status \"9e133944c30b944cfb2ccbfe4aedc13b6b673800099c562d1013a1d0f2026d4b\": rpc error: code = NotFound desc = could not find container \"9e133944c30b944cfb2ccbfe4aedc13b6b673800099c562d1013a1d0f2026d4b\": container with ID starting with 9e133944c30b944cfb2ccbfe4aedc13b6b673800099c562d1013a1d0f2026d4b not found: ID does not exist"
	Jul 31 22:39:44 addons-849486 kubelet[1542]: I0731 22:39:44.613567    1542 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hst2r\" (UniqueName: \"kubernetes.io/projected/97b824ba-9aaa-4a04-839f-fc70bdcb2776-kube-api-access-hst2r\") on node \"addons-849486\" DevicePath \"\""
	Jul 31 22:39:44 addons-849486 kubelet[1542]: I0731 22:39:44.613600    1542 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97b824ba-9aaa-4a04-839f-fc70bdcb2776-webhook-cert\") on node \"addons-849486\" DevicePath \"\""
	Jul 31 22:39:46 addons-849486 kubelet[1542]: I0731 22:39:46.093182    1542 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97b824ba-9aaa-4a04-839f-fc70bdcb2776" path="/var/lib/kubelet/pods/97b824ba-9aaa-4a04-839f-fc70bdcb2776/volumes"
	Jul 31 22:40:03 addons-849486 kubelet[1542]: I0731 22:40:03.092246    1542 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 31 22:40:21 addons-849486 kubelet[1542]: I0731 22:40:21.399283    1542 scope.go:117] "RemoveContainer" containerID="26627e03dfcb81309c04f617e55aeabd286f089140df375b3397f41cbfaf7749"
	Jul 31 22:40:21 addons-849486 kubelet[1542]: I0731 22:40:21.418039    1542 scope.go:117] "RemoveContainer" containerID="cea6d2938f44bd10f65639ae3e6035533be0494d3fffeab7656d442922362c31"
	Jul 31 22:41:14 addons-849486 kubelet[1542]: I0731 22:41:14.091671    1542 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 31 22:42:20 addons-849486 kubelet[1542]: E0731 22:42:20.151967    1542 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/110805b36784ba9132cb5c8aa53b735d1e872233936c0901abe28f3f75a710bf, memory: /docker/110805b36784ba9132cb5c8aa53b735d1e872233936c0901abe28f3f75a710bf/system.slice/kubelet.service"
	Jul 31 22:42:43 addons-849486 kubelet[1542]: I0731 22:42:43.091655    1542 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 31 22:42:55 addons-849486 kubelet[1542]: I0731 22:42:55.901909    1542 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j84gn\" (UniqueName: \"kubernetes.io/projected/3c4a50ec-9a60-43e3-9e0c-a91793afab2d-kube-api-access-j84gn\") pod \"3c4a50ec-9a60-43e3-9e0c-a91793afab2d\" (UID: \"3c4a50ec-9a60-43e3-9e0c-a91793afab2d\") "
	Jul 31 22:42:55 addons-849486 kubelet[1542]: I0731 22:42:55.901967    1542 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3c4a50ec-9a60-43e3-9e0c-a91793afab2d-tmp-dir\") pod \"3c4a50ec-9a60-43e3-9e0c-a91793afab2d\" (UID: \"3c4a50ec-9a60-43e3-9e0c-a91793afab2d\") "
	Jul 31 22:42:55 addons-849486 kubelet[1542]: I0731 22:42:55.902294    1542 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c4a50ec-9a60-43e3-9e0c-a91793afab2d-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3c4a50ec-9a60-43e3-9e0c-a91793afab2d" (UID: "3c4a50ec-9a60-43e3-9e0c-a91793afab2d"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 31 22:42:55 addons-849486 kubelet[1542]: I0731 22:42:55.910797    1542 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c4a50ec-9a60-43e3-9e0c-a91793afab2d-kube-api-access-j84gn" (OuterVolumeSpecName: "kube-api-access-j84gn") pod "3c4a50ec-9a60-43e3-9e0c-a91793afab2d" (UID: "3c4a50ec-9a60-43e3-9e0c-a91793afab2d"). InnerVolumeSpecName "kube-api-access-j84gn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 22:42:55 addons-849486 kubelet[1542]: I0731 22:42:55.949228    1542 scope.go:117] "RemoveContainer" containerID="fb5857a681582508d5430d00266dbefe80c8212f52bade8be5e709a9dada08ec"
	Jul 31 22:42:55 addons-849486 kubelet[1542]: I0731 22:42:55.979219    1542 scope.go:117] "RemoveContainer" containerID="fb5857a681582508d5430d00266dbefe80c8212f52bade8be5e709a9dada08ec"
	Jul 31 22:42:55 addons-849486 kubelet[1542]: E0731 22:42:55.981583    1542 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb5857a681582508d5430d00266dbefe80c8212f52bade8be5e709a9dada08ec\": container with ID starting with fb5857a681582508d5430d00266dbefe80c8212f52bade8be5e709a9dada08ec not found: ID does not exist" containerID="fb5857a681582508d5430d00266dbefe80c8212f52bade8be5e709a9dada08ec"
	Jul 31 22:42:55 addons-849486 kubelet[1542]: I0731 22:42:55.981625    1542 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb5857a681582508d5430d00266dbefe80c8212f52bade8be5e709a9dada08ec"} err="failed to get container status \"fb5857a681582508d5430d00266dbefe80c8212f52bade8be5e709a9dada08ec\": rpc error: code = NotFound desc = could not find container \"fb5857a681582508d5430d00266dbefe80c8212f52bade8be5e709a9dada08ec\": container with ID starting with fb5857a681582508d5430d00266dbefe80c8212f52bade8be5e709a9dada08ec not found: ID does not exist"
	Jul 31 22:42:56 addons-849486 kubelet[1542]: I0731 22:42:56.002907    1542 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-j84gn\" (UniqueName: \"kubernetes.io/projected/3c4a50ec-9a60-43e3-9e0c-a91793afab2d-kube-api-access-j84gn\") on node \"addons-849486\" DevicePath \"\""
	Jul 31 22:42:56 addons-849486 kubelet[1542]: I0731 22:42:56.002960    1542 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3c4a50ec-9a60-43e3-9e0c-a91793afab2d-tmp-dir\") on node \"addons-849486\" DevicePath \"\""
	Jul 31 22:42:56 addons-849486 kubelet[1542]: I0731 22:42:56.093627    1542 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c4a50ec-9a60-43e3-9e0c-a91793afab2d" path="/var/lib/kubelet/pods/3c4a50ec-9a60-43e3-9e0c-a91793afab2d/volumes"
	
	
	==> storage-provisioner [51ebc2ba4de88879295ed4972dd5fd4dbfc779bace166a2294f70f146b15149d] <==
	I0731 22:33:20.916280       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 22:33:20.931522       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 22:33:20.931771       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 22:33:20.948102       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 22:33:20.948476       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-849486_902c7b87-d25f-41db-97be-e918f64904d7!
	I0731 22:33:20.949474       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bac6edc7-406f-4e5a-bd10-08a1792a5d05", APIVersion:"v1", ResourceVersion:"910", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-849486_902c7b87-d25f-41db-97be-e918f64904d7 became leader
	I0731 22:33:21.049221       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-849486_902c7b87-d25f-41db-97be-e918f64904d7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-849486 -n addons-849486
helpers_test.go:261: (dbg) Run:  kubectl --context addons-849486 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (353.70s)
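For local triage, the harness's post-mortem queries above can be rerun by hand, plus a log pull the harness did not capture here. A minimal sketch, assuming the addons-849486 profile is still running and that the addon's deployment is named metrics-server in kube-system (the minikube addon default; both names are taken from the log above):

	# Re-run the harness's post-mortem queries manually
	out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-849486 -n addons-849486
	kubectl --context addons-849486 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# Pull the addon's own logs, which this post-mortem does not include
	kubectl --context addons-849486 -n kube-system logs deployment/metrics-server --tail=100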

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (374.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-130660 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0731 23:34:55.539814 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/custom-flannel-570273/client.crt: no such file or directory
E0731 23:35:03.377220 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
E0731 23:35:04.779607 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/kindnet-570273/client.crt: no such file or directory
E0731 23:35:08.592261 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/enable-default-cni-570273/client.crt: no such file or directory
E0731 23:35:08.597499 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/enable-default-cni-570273/client.crt: no such file or directory
E0731 23:35:08.607706 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/enable-default-cni-570273/client.crt: no such file or directory
E0731 23:35:08.627978 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/enable-default-cni-570273/client.crt: no such file or directory
E0731 23:35:08.668868 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/enable-default-cni-570273/client.crt: no such file or directory
E0731 23:35:08.749161 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/enable-default-cni-570273/client.crt: no such file or directory
E0731 23:35:08.909503 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/enable-default-cni-570273/client.crt: no such file or directory
E0731 23:35:09.230461 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/enable-default-cni-570273/client.crt: no such file or directory
E0731 23:35:09.871449 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/enable-default-cni-570273/client.crt: no such file or directory
E0731 23:35:11.152612 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/enable-default-cni-570273/client.crt: no such file or directory
E0731 23:35:13.713635 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/enable-default-cni-570273/client.crt: no such file or directory
E0731 23:35:18.833869 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/enable-default-cni-570273/client.crt: no such file or directory
E0731 23:35:20.329030 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
E0731 23:35:29.074428 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/enable-default-cni-570273/client.crt: no such file or directory
E0731 23:35:36.500633 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/custom-flannel-570273/client.crt: no such file or directory
E0731 23:35:40.025617 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/calico-570273/client.crt: no such file or directory
E0731 23:35:49.555457 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/enable-default-cni-570273/client.crt: no such file or directory
E0731 23:35:55.295897 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/flannel-570273/client.crt: no such file or directory
E0731 23:35:55.301149 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/flannel-570273/client.crt: no such file or directory
E0731 23:35:55.311387 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/flannel-570273/client.crt: no such file or directory
E0731 23:35:55.331640 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/flannel-570273/client.crt: no such file or directory
E0731 23:35:55.371876 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/flannel-570273/client.crt: no such file or directory
E0731 23:35:55.452129 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/flannel-570273/client.crt: no such file or directory
E0731 23:35:55.612489 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/flannel-570273/client.crt: no such file or directory
E0731 23:35:55.933033 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/flannel-570273/client.crt: no such file or directory
E0731 23:35:56.573371 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/flannel-570273/client.crt: no such file or directory
E0731 23:35:57.854290 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/flannel-570273/client.crt: no such file or directory
E0731 23:36:00.415309 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/flannel-570273/client.crt: no such file or directory
E0731 23:36:04.231852 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/auto-570273/client.crt: no such file or directory
E0731 23:36:05.536080 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/flannel-570273/client.crt: no such file or directory
E0731 23:36:15.776330 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/flannel-570273/client.crt: no such file or directory
E0731 23:36:30.515631 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/enable-default-cni-570273/client.crt: no such file or directory
E0731 23:36:31.914987 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/auto-570273/client.crt: no such file or directory
E0731 23:36:36.256723 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/flannel-570273/client.crt: no such file or directory
E0731 23:36:37.672996 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
E0731 23:36:58.420835 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/custom-flannel-570273/client.crt: no such file or directory
E0731 23:37:10.989350 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/bridge-570273/client.crt: no such file or directory
E0731 23:37:10.994656 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/bridge-570273/client.crt: no such file or directory
E0731 23:37:11.005310 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/bridge-570273/client.crt: no such file or directory
E0731 23:37:11.025623 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/bridge-570273/client.crt: no such file or directory
E0731 23:37:11.065939 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/bridge-570273/client.crt: no such file or directory
E0731 23:37:11.146431 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/bridge-570273/client.crt: no such file or directory
E0731 23:37:11.306678 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/bridge-570273/client.crt: no such file or directory
E0731 23:37:11.627256 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/bridge-570273/client.crt: no such file or directory
E0731 23:37:12.268068 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/bridge-570273/client.crt: no such file or directory
E0731 23:37:13.548262 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/bridge-570273/client.crt: no such file or directory
E0731 23:37:16.108502 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/bridge-570273/client.crt: no such file or directory
E0731 23:37:17.216964 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/flannel-570273/client.crt: no such file or directory
E0731 23:37:20.935725 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/kindnet-570273/client.crt: no such file or directory
E0731 23:37:21.229435 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/bridge-570273/client.crt: no such file or directory
E0731 23:37:31.470309 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/bridge-570273/client.crt: no such file or directory
E0731 23:37:48.620228 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/kindnet-570273/client.crt: no such file or directory
E0731 23:37:51.951185 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/bridge-570273/client.crt: no such file or directory
E0731 23:37:52.435858 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/enable-default-cni-570273/client.crt: no such file or directory
E0731 23:37:56.183012 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/calico-570273/client.crt: no such file or directory
E0731 23:38:23.866418 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/calico-570273/client.crt: no such file or directory
E0731 23:38:32.912026 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/bridge-570273/client.crt: no such file or directory
E0731 23:38:39.137724 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/flannel-570273/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-130660 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 102 (6m11.511466869s)
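To retry the same configuration outside the CI harness, the invocation can be mirrored with a minikube binary; a sketch, with every flag copied from the failing command above (only the binary path differs from the harness's out/minikube-linux-arm64 build, and the --kvm-* flags are inert under the docker driver):

	minikube start -p old-k8s-version-130660 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
	  --keep-context=false --driver=docker --container-runtime=crio --kubernetes-version=v1.20.0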

                                                
                                                
-- stdout --
	* [old-k8s-version-130660] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19360-1579223/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1579223/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-130660" primary control-plane node in "old-k8s-version-130660" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Restarting existing docker container for "old-k8s-version-130660" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-130660 addons enable metrics-server
	
	* Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 23:34:42.571842 1829252 out.go:291] Setting OutFile to fd 1 ...
	I0731 23:34:42.572021 1829252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:34:42.572028 1829252 out.go:304] Setting ErrFile to fd 2...
	I0731 23:34:42.572034 1829252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:34:42.572281 1829252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1579223/.minikube/bin
	I0731 23:34:42.572626 1829252 out.go:298] Setting JSON to false
	I0731 23:34:42.573723 1829252 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":26221,"bootTime":1722442662,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0731 23:34:42.573790 1829252 start.go:139] virtualization:  
	I0731 23:34:42.579705 1829252 out.go:177] * [old-k8s-version-130660] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0731 23:34:42.582193 1829252 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 23:34:42.582266 1829252 notify.go:220] Checking for updates...
	I0731 23:34:42.587839 1829252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 23:34:42.589981 1829252 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1579223/kubeconfig
	I0731 23:34:42.592131 1829252 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1579223/.minikube
	I0731 23:34:42.594418 1829252 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 23:34:42.596267 1829252 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 23:34:42.598828 1829252 config.go:182] Loaded profile config "old-k8s-version-130660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 23:34:42.601707 1829252 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 23:34:42.603940 1829252 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 23:34:42.672802 1829252 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0731 23:34:42.672919 1829252 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 23:34:42.765387 1829252 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-31 23:34:42.752603406 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0731 23:34:42.765509 1829252 docker.go:307] overlay module found
	I0731 23:34:42.769473 1829252 out.go:177] * Using the docker driver based on existing profile
	I0731 23:34:42.771631 1829252 start.go:297] selected driver: docker
	I0731 23:34:42.771649 1829252 start.go:901] validating driver "docker" against &{Name:old-k8s-version-130660 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-130660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:34:42.771771 1829252 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 23:34:42.772380 1829252 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 23:34:42.862826 1829252 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-31 23:34:42.852215928 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0731 23:34:42.863158 1829252 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 23:34:42.863179 1829252 cni.go:84] Creating CNI manager for ""
	I0731 23:34:42.863187 1829252 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 23:34:42.863237 1829252 start.go:340] cluster config:
	{Name:old-k8s-version-130660 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-130660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:34:42.866811 1829252 out.go:177] * Starting "old-k8s-version-130660" primary control-plane node in "old-k8s-version-130660" cluster
	I0731 23:34:42.868555 1829252 cache.go:121] Beginning downloading kic base image for docker with crio
	I0731 23:34:42.870453 1829252 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0731 23:34:42.873198 1829252 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 23:34:42.873258 1829252 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1579223/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0731 23:34:42.873268 1829252 cache.go:56] Caching tarball of preloaded images
	I0731 23:34:42.873347 1829252 preload.go:172] Found /home/jenkins/minikube-integration/19360-1579223/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0731 23:34:42.873356 1829252 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 23:34:42.873476 1829252 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/config.json ...
	I0731 23:34:42.873678 1829252 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	W0731 23:34:42.899696 1829252 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0731 23:34:42.899716 1829252 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0731 23:34:42.899790 1829252 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0731 23:34:42.899822 1829252 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0731 23:34:42.899828 1829252 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0731 23:34:42.899837 1829252 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0731 23:34:42.899848 1829252 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0731 23:34:43.030338 1829252 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0731 23:34:43.030378 1829252 cache.go:194] Successfully downloaded all kic artifacts
	I0731 23:34:43.030408 1829252 start.go:360] acquireMachinesLock for old-k8s-version-130660: {Name:mkc9755a68757cdabbee4e5fe1d18f5125691b13 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 23:34:43.030473 1829252 start.go:364] duration metric: took 41.895µs to acquireMachinesLock for "old-k8s-version-130660"
	I0731 23:34:43.030500 1829252 start.go:96] Skipping create...Using existing machine configuration
	I0731 23:34:43.030509 1829252 fix.go:54] fixHost starting: 
	I0731 23:34:43.030795 1829252 cli_runner.go:164] Run: docker container inspect old-k8s-version-130660 --format={{.State.Status}}
	I0731 23:34:43.056095 1829252 fix.go:112] recreateIfNeeded on old-k8s-version-130660: state=Stopped err=<nil>
	W0731 23:34:43.056215 1829252 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 23:34:43.060563 1829252 out.go:177] * Restarting existing docker container for "old-k8s-version-130660" ...
	I0731 23:34:43.064776 1829252 cli_runner.go:164] Run: docker start old-k8s-version-130660
	I0731 23:34:43.416704 1829252 cli_runner.go:164] Run: docker container inspect old-k8s-version-130660 --format={{.State.Status}}
	I0731 23:34:43.441233 1829252 kic.go:430] container "old-k8s-version-130660" state is running.
	I0731 23:34:43.441625 1829252 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-130660
	I0731 23:34:43.478467 1829252 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/config.json ...
	I0731 23:34:43.478679 1829252 machine.go:94] provisionDockerMachine start ...
	I0731 23:34:43.478734 1829252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130660
	I0731 23:34:43.508418 1829252 main.go:141] libmachine: Using SSH client type: native
	I0731 23:34:43.508675 1829252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34971 <nil> <nil>}
	I0731 23:34:43.508684 1829252 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 23:34:43.509307 1829252 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54300->127.0.0.1:34971: read: connection reset by peer
	I0731 23:34:46.645604 1829252 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-130660
	
	I0731 23:34:46.645631 1829252 ubuntu.go:169] provisioning hostname "old-k8s-version-130660"
	I0731 23:34:46.645716 1829252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130660
	I0731 23:34:46.663860 1829252 main.go:141] libmachine: Using SSH client type: native
	I0731 23:34:46.664119 1829252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34971 <nil> <nil>}
	I0731 23:34:46.664135 1829252 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-130660 && echo "old-k8s-version-130660" | sudo tee /etc/hostname
	I0731 23:34:46.809430 1829252 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-130660
	
	I0731 23:34:46.809567 1829252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130660
	I0731 23:34:46.827603 1829252 main.go:141] libmachine: Using SSH client type: native
	I0731 23:34:46.827858 1829252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34971 <nil> <nil>}
	I0731 23:34:46.827882 1829252 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-130660' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-130660/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-130660' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 23:34:46.960981 1829252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 23:34:46.961011 1829252 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1579223/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1579223/.minikube}
	I0731 23:34:46.961039 1829252 ubuntu.go:177] setting up certificates
	I0731 23:34:46.961049 1829252 provision.go:84] configureAuth start
	I0731 23:34:46.961135 1829252 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-130660
	I0731 23:34:46.977494 1829252 provision.go:143] copyHostCerts
	I0731 23:34:46.977571 1829252 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1579223/.minikube/cert.pem, removing ...
	I0731 23:34:46.977595 1829252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1579223/.minikube/cert.pem
	I0731 23:34:46.977678 1829252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1579223/.minikube/cert.pem (1123 bytes)
	I0731 23:34:46.977794 1829252 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1579223/.minikube/key.pem, removing ...
	I0731 23:34:46.977803 1829252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1579223/.minikube/key.pem
	I0731 23:34:46.977833 1829252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1579223/.minikube/key.pem (1679 bytes)
	I0731 23:34:46.977901 1829252 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.pem, removing ...
	I0731 23:34:46.977911 1829252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.pem
	I0731 23:34:46.977935 1829252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.pem (1082 bytes)
	I0731 23:34:46.978004 1829252 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-130660 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-130660]
	I0731 23:34:47.628190 1829252 provision.go:177] copyRemoteCerts
	I0731 23:34:47.628262 1829252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 23:34:47.628313 1829252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130660
	I0731 23:34:47.647606 1829252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34971 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/old-k8s-version-130660/id_rsa Username:docker}
	I0731 23:34:47.744044 1829252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 23:34:47.772817 1829252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 23:34:47.798561 1829252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 23:34:47.824300 1829252 provision.go:87] duration metric: took 863.23431ms to configureAuth
	I0731 23:34:47.824329 1829252 ubuntu.go:193] setting minikube options for container-runtime
	I0731 23:34:47.824517 1829252 config.go:182] Loaded profile config "old-k8s-version-130660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 23:34:47.824622 1829252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130660
	I0731 23:34:47.842182 1829252 main.go:141] libmachine: Using SSH client type: native
	I0731 23:34:47.842438 1829252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34971 <nil> <nil>}
	I0731 23:34:47.842461 1829252 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 23:34:48.245409 1829252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 23:34:48.245434 1829252 machine.go:97] duration metric: took 4.766745179s to provisionDockerMachine
	I0731 23:34:48.245445 1829252 start.go:293] postStartSetup for "old-k8s-version-130660" (driver="docker")
	I0731 23:34:48.245457 1829252 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 23:34:48.245521 1829252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 23:34:48.245570 1829252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130660
	I0731 23:34:48.270987 1829252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34971 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/old-k8s-version-130660/id_rsa Username:docker}
	I0731 23:34:48.374511 1829252 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 23:34:48.377761 1829252 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 23:34:48.377795 1829252 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 23:34:48.377806 1829252 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 23:34:48.377813 1829252 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0731 23:34:48.377823 1829252 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1579223/.minikube/addons for local assets ...
	I0731 23:34:48.377882 1829252 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1579223/.minikube/files for local assets ...
	I0731 23:34:48.377968 1829252 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1579223/.minikube/files/etc/ssl/certs/15846152.pem -> 15846152.pem in /etc/ssl/certs
	I0731 23:34:48.378071 1829252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 23:34:48.386660 1829252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/files/etc/ssl/certs/15846152.pem --> /etc/ssl/certs/15846152.pem (1708 bytes)
	I0731 23:34:48.412948 1829252 start.go:296] duration metric: took 167.486544ms for postStartSetup
	I0731 23:34:48.413040 1829252 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 23:34:48.413107 1829252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130660
	I0731 23:34:48.429303 1829252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34971 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/old-k8s-version-130660/id_rsa Username:docker}
	I0731 23:34:48.522938 1829252 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 23:34:48.530726 1829252 fix.go:56] duration metric: took 5.500208417s for fixHost
	I0731 23:34:48.530791 1829252 start.go:83] releasing machines lock for "old-k8s-version-130660", held for 5.500302915s
	I0731 23:34:48.530883 1829252 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-130660
	I0731 23:34:48.549303 1829252 ssh_runner.go:195] Run: cat /version.json
	I0731 23:34:48.549375 1829252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130660
	I0731 23:34:48.549312 1829252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 23:34:48.549458 1829252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130660
	I0731 23:34:48.566790 1829252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34971 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/old-k8s-version-130660/id_rsa Username:docker}
	I0731 23:34:48.578834 1829252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34971 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/old-k8s-version-130660/id_rsa Username:docker}
	I0731 23:34:48.668787 1829252 ssh_runner.go:195] Run: systemctl --version
	I0731 23:34:48.806523 1829252 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 23:34:48.951601 1829252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 23:34:48.959106 1829252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 23:34:48.970037 1829252 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0731 23:34:48.970120 1829252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 23:34:48.979676 1829252 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
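
Note: the two find/mv passes above sideline any loopback and bridge/podman CNI configs by renaming them to *.mk_disabled, so the CNI minikube itself manages (kindnet is recommended for this docker+crio combination further down) is the only one CRI-O can load. Illustrative check:

	docker exec old-k8s-version-130660 ls -l /etc/cni/net.d
	# anything ending in .mk_disabled was moved aside by the commands above
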
	I0731 23:34:48.979752 1829252 start.go:495] detecting cgroup driver to use...
	I0731 23:34:48.979815 1829252 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0731 23:34:48.979868 1829252 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 23:34:48.993884 1829252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 23:34:49.007955 1829252 docker.go:217] disabling cri-docker service (if available) ...
	I0731 23:34:49.008063 1829252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 23:34:49.021838 1829252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 23:34:49.034664 1829252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 23:34:49.156344 1829252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 23:34:49.254566 1829252 docker.go:233] disabling docker service ...
	I0731 23:34:49.254663 1829252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 23:34:49.270457 1829252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 23:34:49.282378 1829252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 23:34:49.383088 1829252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 23:34:49.481713 1829252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 23:34:49.494372 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 23:34:49.516143 1829252 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 23:34:49.516273 1829252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:34:49.533365 1829252 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 23:34:49.533482 1829252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:34:49.544770 1829252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:34:49.555230 1829252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
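
Note: after the three sed edits above, the drop-in should contain roughly the following; this is a sketch, since the surrounding sections of the stock 02-crio.conf in the kicbase image are not shown in the log:

	# /etc/crio/crio.conf.d/02-crio.conf (illustrative excerpt)
	pause_image = "registry.k8s.io/pause:3.2"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
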
	I0731 23:34:49.565818 1829252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 23:34:49.575301 1829252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 23:34:49.584172 1829252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 23:34:49.592980 1829252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:34:49.679605 1829252 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 23:34:49.854687 1829252 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 23:34:49.854767 1829252 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 23:34:49.859156 1829252 start.go:563] Will wait 60s for crictl version
	I0731 23:34:49.859219 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:34:49.863047 1829252 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 23:34:49.901119 1829252 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0731 23:34:49.901208 1829252 ssh_runner.go:195] Run: crio --version
	I0731 23:34:49.943141 1829252 ssh_runner.go:195] Run: crio --version
	I0731 23:34:49.992974 1829252 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	I0731 23:34:49.995115 1829252 cli_runner.go:164] Run: docker network inspect old-k8s-version-130660 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 23:34:50.025302 1829252 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0731 23:34:50.029680 1829252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
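
Note: the grep/echo/cp pipeline above is an idempotent /etc/hosts update: strip any stale host.minikube.internal entry, append the current gateway IP, and copy the temp file back into place (the same pattern reappears below for control-plane.minikube.internal). Verification sketch:

	docker exec old-k8s-version-130660 grep minikube.internal /etc/hosts
	# expected, illustratively: 192.168.76.1	host.minikube.internal
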
	I0731 23:34:50.043201 1829252 kubeadm.go:883] updating cluster {Name:old-k8s-version-130660 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-130660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 23:34:50.043323 1829252 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 23:34:50.043383 1829252 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 23:34:50.098774 1829252 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 23:34:50.098803 1829252 crio.go:433] Images already preloaded, skipping extraction
	I0731 23:34:50.098865 1829252 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 23:34:50.144958 1829252 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 23:34:50.144982 1829252 cache_images.go:84] Images are preloaded, skipping loading
	I0731 23:34:50.144994 1829252 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 crio true true} ...
	I0731 23:34:50.145141 1829252 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-130660 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-130660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
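
Note: the unit snippet above is rendered as a systemd drop-in, not a full unit; it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (the 480-byte scp). The empty ExecStart= line is the standard systemd idiom for clearing the stock ExecStart before overriding it. To inspect the merged result on the node (illustrative):

	docker exec old-k8s-version-130660 systemctl cat kubelet
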
	I0731 23:34:50.145234 1829252 ssh_runner.go:195] Run: crio config
	I0731 23:34:50.221393 1829252 cni.go:84] Creating CNI manager for ""
	I0731 23:34:50.221475 1829252 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 23:34:50.221502 1829252 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 23:34:50.221556 1829252 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-130660 NodeName:old-k8s-version-130660 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 23:34:50.221746 1829252 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-130660"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
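
Note: in the KubeletConfiguration above, imageGCHighThresholdPercent: 100 and the all-zero evictionHard thresholds implement the "disable disk resource management by default" comment: a nearly full CI disk will not trigger image GC or pod eviction mid-test. Illustrative check of the rendered config on the node:

	docker exec old-k8s-version-130660 grep -A3 evictionHard /var/lib/kubelet/config.yaml
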
	
	I0731 23:34:50.221859 1829252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 23:34:50.231031 1829252 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 23:34:50.231141 1829252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 23:34:50.240231 1829252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (480 bytes)
	I0731 23:34:50.258865 1829252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 23:34:50.277657 1829252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0731 23:34:50.297409 1829252 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0731 23:34:50.300938 1829252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 23:34:50.312070 1829252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:34:50.399413 1829252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 23:34:50.413937 1829252 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660 for IP: 192.168.76.2
	I0731 23:34:50.414004 1829252 certs.go:194] generating shared ca certs ...
	I0731 23:34:50.414034 1829252 certs.go:226] acquiring lock for ca certs: {Name:mk6ccdabf08b8b9bfa2ad4dfbceb108d85e42085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:34:50.414208 1829252 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.key
	I0731 23:34:50.414287 1829252 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/proxy-client-ca.key
	I0731 23:34:50.414320 1829252 certs.go:256] generating profile certs ...
	I0731 23:34:50.414457 1829252 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/client.key
	I0731 23:34:50.414587 1829252 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/apiserver.key.699bafe4
	I0731 23:34:50.414674 1829252 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/proxy-client.key
	I0731 23:34:50.414817 1829252 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/1584615.pem (1338 bytes)
	W0731 23:34:50.414873 1829252 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/1584615_empty.pem, impossibly tiny 0 bytes
	I0731 23:34:50.414898 1829252 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 23:34:50.414953 1829252 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca.pem (1082 bytes)
	I0731 23:34:50.415001 1829252 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/cert.pem (1123 bytes)
	I0731 23:34:50.415054 1829252 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/key.pem (1679 bytes)
	I0731 23:34:50.415129 1829252 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/files/etc/ssl/certs/15846152.pem (1708 bytes)
	I0731 23:34:50.415823 1829252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 23:34:50.464050 1829252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 23:34:50.503553 1829252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 23:34:50.542575 1829252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 23:34:50.576585 1829252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 23:34:50.602949 1829252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 23:34:50.631574 1829252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 23:34:50.657355 1829252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 23:34:50.683051 1829252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/1584615.pem --> /usr/share/ca-certificates/1584615.pem (1338 bytes)
	I0731 23:34:50.708462 1829252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/files/etc/ssl/certs/15846152.pem --> /usr/share/ca-certificates/15846152.pem (1708 bytes)
	I0731 23:34:50.733889 1829252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 23:34:50.761009 1829252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 23:34:50.779770 1829252 ssh_runner.go:195] Run: openssl version
	I0731 23:34:50.785507 1829252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1584615.pem && ln -fs /usr/share/ca-certificates/1584615.pem /etc/ssl/certs/1584615.pem"
	I0731 23:34:50.795458 1829252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1584615.pem
	I0731 23:34:50.798968 1829252 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 22:43 /usr/share/ca-certificates/1584615.pem
	I0731 23:34:50.799039 1829252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1584615.pem
	I0731 23:34:50.806615 1829252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1584615.pem /etc/ssl/certs/51391683.0"
	I0731 23:34:50.823621 1829252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15846152.pem && ln -fs /usr/share/ca-certificates/15846152.pem /etc/ssl/certs/15846152.pem"
	I0731 23:34:50.833226 1829252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15846152.pem
	I0731 23:34:50.838566 1829252 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 22:43 /usr/share/ca-certificates/15846152.pem
	I0731 23:34:50.838635 1829252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15846152.pem
	I0731 23:34:50.846076 1829252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15846152.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 23:34:50.855502 1829252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 23:34:50.865365 1829252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:34:50.869175 1829252 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:34:50.869246 1829252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:34:50.876735 1829252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
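
Note: the x509 -hash calls above compute the OpenSSL subject-name hash, and the <hash>.0 symlinks (51391683.0, 3ec20f2e.0, b5213941.0) are what make each CA discoverable through OpenSSL's hashed-directory lookup in /etc/ssl/certs. A condensed form of the same idiom:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
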
	I0731 23:34:50.886063 1829252 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 23:34:50.889867 1829252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 23:34:50.896834 1829252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 23:34:50.904427 1829252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 23:34:50.911444 1829252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 23:34:50.919058 1829252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 23:34:50.926709 1829252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
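
Note: each -checkend 86400 run asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit is what would force regeneration. Stand-alone form (illustrative):

	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt \
	  && echo "valid for at least 24h" || echo "expires within 24h"
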
	I0731 23:34:50.933902 1829252 kubeadm.go:392] StartCluster: {Name:old-k8s-version-130660 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-130660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:34:50.933999 1829252 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 23:34:50.934106 1829252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 23:34:50.980758 1829252 cri.go:89] found id: ""
	I0731 23:34:50.980881 1829252 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 23:34:50.989899 1829252 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 23:34:50.989951 1829252 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 23:34:50.990026 1829252 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 23:34:50.998797 1829252 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 23:34:50.999433 1829252 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-130660" does not appear in /home/jenkins/minikube-integration/19360-1579223/kubeconfig
	I0731 23:34:50.999697 1829252 kubeconfig.go:62] /home/jenkins/minikube-integration/19360-1579223/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-130660" cluster setting kubeconfig missing "old-k8s-version-130660" context setting]
	I0731 23:34:51.000185 1829252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/kubeconfig: {Name:mkfef6e38d1ebcc45fcbbe766a2ae2945f7bd392 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:34:51.002069 1829252 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 23:34:51.013954 1829252 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0731 23:34:51.013991 1829252 kubeadm.go:597] duration metric: took 24.027374ms to restartPrimaryControlPlane
	I0731 23:34:51.014003 1829252 kubeadm.go:394] duration metric: took 80.110628ms to StartCluster
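
Note: the "does not require reconfiguration" verdict rests on the diff at 23:34:51.002069: the staged /var/tmp/minikube/kubeadm.yaml.new matches the kubeadm.yaml already on disk, so minikube skips a full kubeadm init and proceeds to restart components. Equivalent manual check (illustrative):

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "no reconfiguration needed"
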
	I0731 23:34:51.014019 1829252 settings.go:142] acquiring lock: {Name:mk3c0c3b857f6d982767b7eb95481d3e4843baa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:34:51.014093 1829252 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1579223/kubeconfig
	I0731 23:34:51.015040 1829252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/kubeconfig: {Name:mkfef6e38d1ebcc45fcbbe766a2ae2945f7bd392 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:34:51.015278 1829252 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 23:34:51.015657 1829252 config.go:182] Loaded profile config "old-k8s-version-130660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 23:34:51.015730 1829252 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 23:34:51.015894 1829252 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-130660"
	I0731 23:34:51.015931 1829252 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-130660"
	W0731 23:34:51.015943 1829252 addons.go:243] addon storage-provisioner should already be in state true
	I0731 23:34:51.015955 1829252 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-130660"
	I0731 23:34:51.016020 1829252 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-130660"
	I0731 23:34:51.015968 1829252 host.go:66] Checking if "old-k8s-version-130660" exists ...
	I0731 23:34:51.016417 1829252 cli_runner.go:164] Run: docker container inspect old-k8s-version-130660 --format={{.State.Status}}
	I0731 23:34:51.016557 1829252 cli_runner.go:164] Run: docker container inspect old-k8s-version-130660 --format={{.State.Status}}
	I0731 23:34:51.015973 1829252 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-130660"
	I0731 23:34:51.017148 1829252 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-130660"
	W0731 23:34:51.017184 1829252 addons.go:243] addon metrics-server should already be in state true
	I0731 23:34:51.017405 1829252 host.go:66] Checking if "old-k8s-version-130660" exists ...
	I0731 23:34:51.018015 1829252 cli_runner.go:164] Run: docker container inspect old-k8s-version-130660 --format={{.State.Status}}
	I0731 23:34:51.015978 1829252 addons.go:69] Setting dashboard=true in profile "old-k8s-version-130660"
	I0731 23:34:51.019508 1829252 addons.go:234] Setting addon dashboard=true in "old-k8s-version-130660"
	W0731 23:34:51.019522 1829252 addons.go:243] addon dashboard should already be in state true
	I0731 23:34:51.019556 1829252 host.go:66] Checking if "old-k8s-version-130660" exists ...
	I0731 23:34:51.020033 1829252 cli_runner.go:164] Run: docker container inspect old-k8s-version-130660 --format={{.State.Status}}
	I0731 23:34:51.021517 1829252 out.go:177] * Verifying Kubernetes components...
	I0731 23:34:51.028483 1829252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:34:51.056343 1829252 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 23:34:51.058585 1829252 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 23:34:51.058613 1829252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 23:34:51.058686 1829252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130660
	I0731 23:34:51.073014 1829252 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-130660"
	W0731 23:34:51.073045 1829252 addons.go:243] addon default-storageclass should already be in state true
	I0731 23:34:51.073074 1829252 host.go:66] Checking if "old-k8s-version-130660" exists ...
	I0731 23:34:51.073549 1829252 cli_runner.go:164] Run: docker container inspect old-k8s-version-130660 --format={{.State.Status}}
	I0731 23:34:51.088668 1829252 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0731 23:34:51.095082 1829252 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0731 23:34:51.096858 1829252 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0731 23:34:51.096886 1829252 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0731 23:34:51.096961 1829252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130660
	I0731 23:34:51.098973 1829252 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 23:34:51.102794 1829252 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 23:34:51.102822 1829252 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 23:34:51.102898 1829252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130660
	I0731 23:34:51.126436 1829252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34971 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/old-k8s-version-130660/id_rsa Username:docker}
	I0731 23:34:51.155770 1829252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34971 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/old-k8s-version-130660/id_rsa Username:docker}
	I0731 23:34:51.158719 1829252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34971 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/old-k8s-version-130660/id_rsa Username:docker}
	I0731 23:34:51.165441 1829252 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 23:34:51.165466 1829252 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 23:34:51.165531 1829252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130660
	I0731 23:34:51.202804 1829252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34971 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/old-k8s-version-130660/id_rsa Username:docker}
	I0731 23:34:51.255873 1829252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 23:34:51.273494 1829252 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-130660" to be "Ready" ...
	I0731 23:34:51.303556 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 23:34:51.317699 1829252 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0731 23:34:51.317765 1829252 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0731 23:34:51.319146 1829252 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 23:34:51.319199 1829252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 23:34:51.353359 1829252 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 23:34:51.353435 1829252 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 23:34:51.366493 1829252 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0731 23:34:51.366561 1829252 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0731 23:34:51.382428 1829252 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 23:34:51.382498 1829252 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 23:34:51.390365 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 23:34:51.433226 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 23:34:51.436476 1829252 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0731 23:34:51.436542 1829252 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0731 23:34:51.484191 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:51.484293 1829252 retry.go:31] will retry after 369.379041ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
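
Note: every "connection to the server localhost:8443 was refused" below is expected at this point: the addon applies race an apiserver that is still coming up after the crio restart, and retry.go re-queues each kubectl apply with a randomized backoff (369ms, 349ms, 157ms, ... above and below). A sketch of waiting for the apiserver explicitly instead, not minikube's actual code:

	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.20.0/kubectl get --raw=/healthz >/dev/null 2>&1; do
	  sleep 2
	done
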
	I0731 23:34:51.518805 1829252 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0731 23:34:51.518880 1829252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0731 23:34:51.568892 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:51.568981 1829252 retry.go:31] will retry after 349.994928ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0731 23:34:51.575418 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:51.575501 1829252 retry.go:31] will retry after 157.613409ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:51.581956 1829252 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0731 23:34:51.582041 1829252 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0731 23:34:51.601884 1829252 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0731 23:34:51.601914 1829252 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0731 23:34:51.620126 1829252 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0731 23:34:51.620194 1829252 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0731 23:34:51.639546 1829252 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0731 23:34:51.639612 1829252 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0731 23:34:51.658738 1829252 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0731 23:34:51.658762 1829252 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0731 23:34:51.677418 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0731 23:34:51.733676 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0731 23:34:51.750444 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:51.750495 1829252 retry.go:31] will retry after 291.173746ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0731 23:34:51.814532 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:51.814566 1829252 retry.go:31] will retry after 411.534813ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:51.854698 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 23:34:51.919507 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0731 23:34:51.924348 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:51.924425 1829252 retry.go:31] will retry after 472.075349ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0731 23:34:52.011425 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:52.011504 1829252 retry.go:31] will retry after 428.082989ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:52.042653 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0731 23:34:52.116575 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:52.116656 1829252 retry.go:31] will retry after 480.082374ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:52.226908 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0731 23:34:52.308525 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:52.308606 1829252 retry.go:31] will retry after 653.383075ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:52.396741 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 23:34:52.440044 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0731 23:34:52.478701 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:52.478734 1829252 retry.go:31] will retry after 473.963657ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0731 23:34:52.536070 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:52.536104 1829252 retry.go:31] will retry after 731.504261ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:52.597796 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0731 23:34:52.668242 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:52.668284 1829252 retry.go:31] will retry after 313.156115ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:52.952955 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 23:34:52.962328 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 23:34:52.981817 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0731 23:34:53.091336 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:53.091415 1829252 retry.go:31] will retry after 1.218785027s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0731 23:34:53.091506 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:53.091541 1829252 retry.go:31] will retry after 1.043755261s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0731 23:34:53.123253 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:53.123334 1829252 retry.go:31] will retry after 928.941521ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:53.268616 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0731 23:34:53.274312 1829252 node_ready.go:53] error getting node "old-k8s-version-130660": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-130660": dial tcp 192.168.76.2:8443: connect: connection refused
	W0731 23:34:53.339765 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:53.339812 1829252 retry.go:31] will retry after 1.096558658s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:54.053265 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0731 23:34:54.132421 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:54.132455 1829252 retry.go:31] will retry after 1.122891049s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:54.135522 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0731 23:34:54.206612 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:54.206646 1829252 retry.go:31] will retry after 706.786803ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:54.311310 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0731 23:34:54.382664 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:54.382707 1829252 retry.go:31] will retry after 1.140551629s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:54.436982 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0731 23:34:54.507745 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:54.507781 1829252 retry.go:31] will retry after 1.831122878s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:54.914030 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0731 23:34:54.992690 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:54.992762 1829252 retry.go:31] will retry after 1.051771021s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:55.256124 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0731 23:34:55.274798 1829252 node_ready.go:53] error getting node "old-k8s-version-130660": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-130660": dial tcp 192.168.76.2:8443: connect: connection refused
	W0731 23:34:55.335284 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:55.335316 1829252 retry.go:31] will retry after 969.985535ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:55.523458 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0731 23:34:55.614999 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:55.615030 1829252 retry.go:31] will retry after 1.226301392s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:56.045143 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0731 23:34:56.122917 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:56.122950 1829252 retry.go:31] will retry after 3.547415234s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:56.305776 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0731 23:34:56.339047 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0731 23:34:56.396119 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:56.396151 1829252 retry.go:31] will retry after 1.830283978s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0731 23:34:56.428447 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:56.428480 1829252 retry.go:31] will retry after 1.144135109s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:56.841657 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0731 23:34:56.919660 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:56.919694 1829252 retry.go:31] will retry after 3.666014723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:57.573705 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0731 23:34:57.646721 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:57.646753 1829252 retry.go:31] will retry after 2.134642162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:57.774320 1829252 node_ready.go:53] error getting node "old-k8s-version-130660": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-130660": dial tcp 192.168.76.2:8443: connect: connection refused
	I0731 23:34:58.226796 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0731 23:34:58.298068 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:58.298102 1829252 retry.go:31] will retry after 2.154378395s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:59.670519 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 23:34:59.782566 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0731 23:34:59.795306 1829252 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:34:59.795340 1829252 retry.go:31] will retry after 4.804806928s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0731 23:35:00.453682 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0731 23:35:00.586654 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 23:35:04.601221 1829252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 23:35:08.008444 1829252 node_ready.go:49] node "old-k8s-version-130660" has status "Ready":"True"
	I0731 23:35:08.008487 1829252 node_ready.go:38] duration metric: took 16.734896105s for node "old-k8s-version-130660" to be "Ready" ...
	I0731 23:35:08.008501 1829252 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 23:35:08.234129 1829252 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-nqxld" in "kube-system" namespace to be "Ready" ...
	I0731 23:35:08.414576 1829252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (8.631954963s)
	I0731 23:35:08.452124 1829252 pod_ready.go:92] pod "coredns-74ff55c5b-nqxld" in "kube-system" namespace has status "Ready":"True"
	I0731 23:35:08.452150 1829252 pod_ready.go:81] duration metric: took 217.925298ms for pod "coredns-74ff55c5b-nqxld" in "kube-system" namespace to be "Ready" ...
	I0731 23:35:08.452161 1829252 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-130660" in "kube-system" namespace to be "Ready" ...
	I0731 23:35:08.583967 1829252 pod_ready.go:92] pod "etcd-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"True"
	I0731 23:35:08.584025 1829252 pod_ready.go:81] duration metric: took 131.844287ms for pod "etcd-old-k8s-version-130660" in "kube-system" namespace to be "Ready" ...
	I0731 23:35:08.584041 1829252 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-130660" in "kube-system" namespace to be "Ready" ...
	I0731 23:35:08.646613 1829252 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"True"
	I0731 23:35:08.646636 1829252 pod_ready.go:81] duration metric: took 62.58664ms for pod "kube-apiserver-old-k8s-version-130660" in "kube-system" namespace to be "Ready" ...
	I0731 23:35:08.646649 1829252 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace to be "Ready" ...
	I0731 23:35:09.090986 1829252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.504282344s)
	I0731 23:35:09.091035 1829252 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-130660"
	I0731 23:35:09.091086 1829252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.489833647s)
	I0731 23:35:09.091230 1829252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.637483345s)
	I0731 23:35:09.093648 1829252 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-130660 addons enable metrics-server
	
	I0731 23:35:09.095796 1829252 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0731 23:35:09.098143 1829252 addons.go:510] duration metric: took 18.082407739s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
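The retry.go entries above show the addon-apply loop at work: each failed "kubectl apply --force" against the not-yet-listening API server on localhost:8443 is rescheduled after a randomized wait (roughly 0.3s to 4.8s in this run), and all four addon bundles eventually complete once the server answers (the 8.5-8.6s "Completed" entries). A minimal Go sketch of that retry shape follows; the function name and backoff policy are illustrative assumptions, not minikube's actual retry package.

    package retryutil

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    // applyWithRetry mirrors the retry.go pattern in the log above: run
    // "kubectl apply --force -f ..." and, on failure, sleep a randomized
    // backoff before the next attempt, until the API server answers.
    // Illustrative sketch only -- names and backoff policy are assumptions.
    func applyWithRetry(kubectl string, manifests []string, maxAttempts int) error {
    	args := []string{"apply", "--force"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	var lastErr error
    	for attempt := 1; attempt <= maxAttempts; attempt++ {
    		out, err := exec.Command(kubectl, args...).CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		lastErr = fmt.Errorf("attempt %d: %v: %s", attempt, err, out)
    		// Randomized wait, standing in for the ~0.3s-4.8s jitter above.
    		time.Sleep(time.Duration(300+rand.Intn(4500)) * time.Millisecond)
    	}
    	return lastErr
    }

In the trace, separate goroutines run this loop concurrently for the storageclass, storage-provisioner, metrics-server, and dashboard bundles, which is why the "will retry after" lines interleave.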
	I0731 23:35:10.652905 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:35:13.152805 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:35:15.154232 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:35:17.155438 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:35:19.653385 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:35:21.653737 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:35:24.153141 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:35:26.656094 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:35:29.156445 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:35:31.656314 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:35:33.658736 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:35:36.153237 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:35:38.154328 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:35:40.176989 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:35:42.654057 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:35:45.159309 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:35:47.653290 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:35:49.653341 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:35:52.152523 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:35:54.654106 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:35:56.661924 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:35:59.152990 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:01.652826 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:03.654018 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:06.153149 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:08.153368 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:10.155166 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:12.158720 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:14.653295 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:17.152392 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:19.152728 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:21.153723 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:23.653157 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:26.153464 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:28.652819 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:30.652986 1829252 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:31.152275 1829252 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"True"
	I0731 23:36:31.152302 1829252 pod_ready.go:81] duration metric: took 1m22.505645262s for pod "kube-controller-manager-old-k8s-version-130660" in "kube-system" namespace to be "Ready" ...
	I0731 23:36:31.152314 1829252 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vsnfm" in "kube-system" namespace to be "Ready" ...
	I0731 23:36:31.156916 1829252 pod_ready.go:92] pod "kube-proxy-vsnfm" in "kube-system" namespace has status "Ready":"True"
	I0731 23:36:31.156942 1829252 pod_ready.go:81] duration metric: took 4.620923ms for pod "kube-proxy-vsnfm" in "kube-system" namespace to be "Ready" ...
	I0731 23:36:31.156953 1829252 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-130660" in "kube-system" namespace to be "Ready" ...
	I0731 23:36:31.162135 1829252 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-130660" in "kube-system" namespace has status "Ready":"True"
	I0731 23:36:31.162158 1829252 pod_ready.go:81] duration metric: took 5.19715ms for pod "kube-scheduler-old-k8s-version-130660" in "kube-system" namespace to be "Ready" ...
	I0731 23:36:31.162170 1829252 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace to be "Ready" ...
	I0731 23:36:33.167491 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:35.169793 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:37.668704 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:40.171893 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:42.668624 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:44.670037 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:47.168429 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:49.668952 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:51.669365 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:54.169279 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:56.669441 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:36:59.168537 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:01.668623 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:04.168426 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:06.668001 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:09.168518 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:11.169340 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:13.669231 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:16.167485 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:18.168578 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:20.168924 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:22.668086 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:24.668837 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:27.168490 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:29.668495 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:32.168260 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:34.168394 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:36.678419 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:39.169849 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:41.668283 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:44.167842 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:46.168117 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:48.667986 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:50.668536 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:53.167883 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:55.169044 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:57.667987 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:37:59.668836 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:02.167946 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:04.167994 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:06.671011 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:09.168278 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:11.168735 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:13.174613 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:15.668627 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:17.668843 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:19.670167 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:22.168265 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:24.668802 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:27.168530 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:29.169519 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:31.668442 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:34.167934 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:36.668752 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:39.168385 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:41.169284 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:43.668258 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:45.668387 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:47.669412 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:49.677744 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:52.169320 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:54.669004 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:57.168232 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:38:59.244594 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:01.668742 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:03.670959 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:06.167769 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:08.168483 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:10.169075 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:12.668394 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:14.670193 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:17.167442 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:19.167603 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:21.167785 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:23.168142 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:25.169514 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:27.199210 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:29.670147 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:32.169370 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:34.669368 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:37.168209 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:39.730387 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:42.169022 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:44.668803 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:47.168065 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:49.171344 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:51.670446 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:53.672244 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:56.168225 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:58.670724 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:00.671990 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:03.169169 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:05.169709 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:07.668238 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:09.669577 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:11.669953 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:14.168734 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:16.170059 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:18.668036 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:20.669542 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:23.168832 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:25.169914 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:27.667676 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:29.668497 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:31.168677 1829252 pod_ready.go:81] duration metric: took 4m0.006492497s for pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace to be "Ready" ...
	E0731 23:40:31.168708 1829252 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 23:40:31.168720 1829252 pod_ready.go:38] duration metric: took 5m23.160205049s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
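The pod_ready.go wait that just gave up follows the classic poll-until-deadline shape: fetch the pod, check its Ready condition, sleep, and stop with context deadline exceeded once the 4m budget for metrics-server-9975d5f86-w25p2 runs out. A sketch of that shape using client-go is below; the 2s interval and the function name are illustrative assumptions, not minikube's source.

    package podwait

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod until its Ready condition is True or the
    // context deadline expires -- the shape of the pod_ready.go wait above,
    // which returned "context deadline exceeded" after its 4m budget.
    // Sketch under assumptions; interval and name are illustrative.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    	ticker := time.NewTicker(2 * time.Second)
    	defer ticker.Stop()
    	for {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, cond := range pod.Status.Conditions {
    				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err() // context.DeadlineExceeded once the budget is spent
    		case <-ticker.C:
    		}
    	}
    }

Note the wait tolerates transient Get errors (as the loop above tolerated "connection refused" while the API server restarted) and only fails when the deadline itself expires.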
	I0731 23:40:31.168736 1829252 api_server.go:52] waiting for apiserver process to appear ...
	I0731 23:40:31.168766 1829252 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 23:40:31.168832 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 23:40:31.210665 1829252 cri.go:89] found id: "46018fe62e056d85a9d92909a3ffe934843c413af90487065b86f79764b6e416"
	I0731 23:40:31.210687 1829252 cri.go:89] found id: ""
	I0731 23:40:31.210695 1829252 logs.go:276] 1 containers: [46018fe62e056d85a9d92909a3ffe934843c413af90487065b86f79764b6e416]
	I0731 23:40:31.210762 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:31.214275 1829252 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 23:40:31.214347 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 23:40:31.254998 1829252 cri.go:89] found id: "6854efb05017bc711513167086bed436131771d8c2f42fe3910912ac80224c6c"
	I0731 23:40:31.255072 1829252 cri.go:89] found id: ""
	I0731 23:40:31.255096 1829252 logs.go:276] 1 containers: [6854efb05017bc711513167086bed436131771d8c2f42fe3910912ac80224c6c]
	I0731 23:40:31.255183 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:31.259162 1829252 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 23:40:31.259290 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 23:40:31.306298 1829252 cri.go:89] found id: "de1b5cb3d187661a9da8df949ef36da261421dac028ae8e070be86f4226a470b"
	I0731 23:40:31.306366 1829252 cri.go:89] found id: ""
	I0731 23:40:31.306386 1829252 logs.go:276] 1 containers: [de1b5cb3d187661a9da8df949ef36da261421dac028ae8e070be86f4226a470b]
	I0731 23:40:31.306468 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:31.310127 1829252 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 23:40:31.310198 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 23:40:31.350673 1829252 cri.go:89] found id: "9ea334663167822d57c49e09d75be27e43c4d14410ab00696d0bd381e74ba548"
	I0731 23:40:31.350738 1829252 cri.go:89] found id: ""
	I0731 23:40:31.350753 1829252 logs.go:276] 1 containers: [9ea334663167822d57c49e09d75be27e43c4d14410ab00696d0bd381e74ba548]
	I0731 23:40:31.350819 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:31.354414 1829252 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 23:40:31.354500 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 23:40:31.396679 1829252 cri.go:89] found id: "c21ba7634bd1a874c9ebc36d102421cb8c623f6f4fc40ef723fae526a1693796"
	I0731 23:40:31.396705 1829252 cri.go:89] found id: ""
	I0731 23:40:31.396712 1829252 logs.go:276] 1 containers: [c21ba7634bd1a874c9ebc36d102421cb8c623f6f4fc40ef723fae526a1693796]
	I0731 23:40:31.396776 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:31.400231 1829252 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 23:40:31.400307 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 23:40:31.453366 1829252 cri.go:89] found id: "6d2854e476f196556888fd39fa1c3e23a43223b6bf7632e42e8a2d3e07316f00"
	I0731 23:40:31.453390 1829252 cri.go:89] found id: ""
	I0731 23:40:31.453398 1829252 logs.go:276] 1 containers: [6d2854e476f196556888fd39fa1c3e23a43223b6bf7632e42e8a2d3e07316f00]
	I0731 23:40:31.453454 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:31.457018 1829252 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 23:40:31.457089 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 23:40:31.495600 1829252 cri.go:89] found id: "bab863fc639de5e0d2e1165a89d0b3b22043e0ffdb80ad3296bca7a7262985f7"
	I0731 23:40:31.495620 1829252 cri.go:89] found id: ""
	I0731 23:40:31.495628 1829252 logs.go:276] 1 containers: [bab863fc639de5e0d2e1165a89d0b3b22043e0ffdb80ad3296bca7a7262985f7]
	I0731 23:40:31.495689 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:31.499263 1829252 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 23:40:31.499355 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 23:40:31.538992 1829252 cri.go:89] found id: "d6f8ffde1c928670ca20febc0c4a113d8ad3e22267d8fd745d5c54e8cc854a27"
	I0731 23:40:31.539015 1829252 cri.go:89] found id: ""
	I0731 23:40:31.539023 1829252 logs.go:276] 1 containers: [d6f8ffde1c928670ca20febc0c4a113d8ad3e22267d8fd745d5c54e8cc854a27]
	I0731 23:40:31.539099 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:31.542861 1829252 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 23:40:31.542996 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 23:40:31.594113 1829252 cri.go:89] found id: "af4a5d8f7ff273f31d82cb997af6a85d6437344b03032e0188e4e18ed9144b92"
	I0731 23:40:31.594172 1829252 cri.go:89] found id: "e7eb6229b113884a77ab02b300c2b453be8b1723c83dc8f8862391ed9ef419e1"
	I0731 23:40:31.594192 1829252 cri.go:89] found id: ""
	I0731 23:40:31.594219 1829252 logs.go:276] 2 containers: [af4a5d8f7ff273f31d82cb997af6a85d6437344b03032e0188e4e18ed9144b92 e7eb6229b113884a77ab02b300c2b453be8b1723c83dc8f8862391ed9ef419e1]
	I0731 23:40:31.594308 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:31.598495 1829252 ssh_runner.go:195] Run: which crictl
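The block above is minikube's container-discovery pass: for each control-plane component it asks the CRI runtime for matching container IDs (exactly one per component here), then resolves the crictl path it will use for the log tails that follow. A minimal sketch of the same discovery, runnable inside the node over `minikube ssh` (assuming crictl is on the PATH, as it is in this image):

    # List IDs of kube-apiserver containers, running or exited,
    # exactly as the cri.go listing above does.
    sudo crictl ps -a --quiet --name=kube-apiserver
    # Resolve the crictl binary used by the later `crictl logs --tail 400` calls.
    which crictl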
	I0731 23:40:31.601970 1829252 logs.go:123] Gathering logs for dmesg ...
	I0731 23:40:31.601993 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
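The dmesg pass keeps only kernel messages at warning severity and above; per util-linux dmesg, -P disables the pager, -H prints human-readable timestamps, and -L=never suppresses color so the capture stays plain text. The filter, verbatim from the Run line above:

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400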
	I0731 23:40:31.622722 1829252 logs.go:123] Gathering logs for etcd [6854efb05017bc711513167086bed436131771d8c2f42fe3910912ac80224c6c] ...
	I0731 23:40:31.622751 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6854efb05017bc711513167086bed436131771d8c2f42fe3910912ac80224c6c"
	I0731 23:40:31.669324 1829252 logs.go:123] Gathering logs for kube-proxy [c21ba7634bd1a874c9ebc36d102421cb8c623f6f4fc40ef723fae526a1693796] ...
	I0731 23:40:31.669358 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c21ba7634bd1a874c9ebc36d102421cb8c623f6f4fc40ef723fae526a1693796"
	I0731 23:40:31.709562 1829252 logs.go:123] Gathering logs for kindnet [bab863fc639de5e0d2e1165a89d0b3b22043e0ffdb80ad3296bca7a7262985f7] ...
	I0731 23:40:31.709593 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bab863fc639de5e0d2e1165a89d0b3b22043e0ffdb80ad3296bca7a7262985f7"
	I0731 23:40:31.760136 1829252 logs.go:123] Gathering logs for storage-provisioner [af4a5d8f7ff273f31d82cb997af6a85d6437344b03032e0188e4e18ed9144b92] ...
	I0731 23:40:31.760180 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af4a5d8f7ff273f31d82cb997af6a85d6437344b03032e0188e4e18ed9144b92"
	I0731 23:40:31.800595 1829252 logs.go:123] Gathering logs for container status ...
	I0731 23:40:31.800673 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
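The container-status pass is runtime-agnostic by construction: it runs whichever crictl binary `which` resolves (falling back to a bare `crictl` if none is found), and if that whole invocation fails it falls back to `sudo docker ps -a`. The fallback chain, as executed above:

    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a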
	I0731 23:40:31.852654 1829252 logs.go:123] Gathering logs for kubelet ...
	I0731 23:40:31.852732 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 23:40:31.905287 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593291     742 reflector.go:138] object-"kube-system"/"kindnet-token-crzsj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-crzsj" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:31.905584 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593424     742 reflector.go:138] object-"kube-system"/"storage-provisioner-token-d22vf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-d22vf" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:31.905826 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593481     742 reflector.go:138] object-"kube-system"/"metrics-server-token-kr52c": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-kr52c" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:31.906052 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593534     742 reflector.go:138] object-"kube-system"/"kube-proxy-token-fzgtf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-fzgtf" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:31.906267 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593592     742 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:31.906486 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593646     742 reflector.go:138] object-"default"/"default-token-4prkn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-4prkn" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:31.906695 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593719     742 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:31.906917 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593768     742 reflector.go:138] object-"kube-system"/"coredns-token-6sgwf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-6sgwf" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:31.917757 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:09 old-k8s-version-130660 kubelet[742]: E0731 23:35:09.731660     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0731 23:40:31.918380 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:09 old-k8s-version-130660 kubelet[742]: E0731 23:35:09.980822     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.922205 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:23 old-k8s-version-130660 kubelet[742]: E0731 23:35:23.912962     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0731 23:40:31.923741 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:37 old-k8s-version-130660 kubelet[742]: E0731 23:35:37.900150     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.924088 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:38 old-k8s-version-130660 kubelet[742]: E0731 23:35:38.139768     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.924573 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:39 old-k8s-version-130660 kubelet[742]: E0731 23:35:39.142583     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.925055 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:42 old-k8s-version-130660 kubelet[742]: E0731 23:35:42.939292     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.927208 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:51 old-k8s-version-130660 kubelet[742]: E0731 23:35:51.915245     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0731 23:40:31.927833 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:55 old-k8s-version-130660 kubelet[742]: E0731 23:35:55.174079     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.928027 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:02 old-k8s-version-130660 kubelet[742]: E0731 23:36:02.900602     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.928415 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:02 old-k8s-version-130660 kubelet[742]: E0731 23:36:02.938961     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.928612 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:14 old-k8s-version-130660 kubelet[742]: E0731 23:36:14.900659     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.929257 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:16 old-k8s-version-130660 kubelet[742]: E0731 23:36:16.237810     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.929603 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:22 old-k8s-version-130660 kubelet[742]: E0731 23:36:22.938992     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.929795 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:25 old-k8s-version-130660 kubelet[742]: E0731 23:36:25.900141     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.930137 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:37 old-k8s-version-130660 kubelet[742]: E0731 23:36:37.900048     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.932288 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:37 old-k8s-version-130660 kubelet[742]: E0731 23:36:37.913581     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0731 23:40:31.932483 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:48 old-k8s-version-130660 kubelet[742]: E0731 23:36:48.900800     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.932831 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:50 old-k8s-version-130660 kubelet[742]: E0731 23:36:50.899714     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.933026 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:02 old-k8s-version-130660 kubelet[742]: E0731 23:37:02.901296     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.933664 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:06 old-k8s-version-130660 kubelet[742]: E0731 23:37:06.310165     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.934010 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:12 old-k8s-version-130660 kubelet[742]: E0731 23:37:12.939044     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.934204 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:14 old-k8s-version-130660 kubelet[742]: E0731 23:37:14.900812     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.934549 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:27 old-k8s-version-130660 kubelet[742]: E0731 23:37:27.899533     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.934743 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:28 old-k8s-version-130660 kubelet[742]: E0731 23:37:28.900944     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.935086 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:41 old-k8s-version-130660 kubelet[742]: E0731 23:37:41.899714     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.935276 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:42 old-k8s-version-130660 kubelet[742]: E0731 23:37:42.900065     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.935465 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:54 old-k8s-version-130660 kubelet[742]: E0731 23:37:54.900104     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.935812 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:55 old-k8s-version-130660 kubelet[742]: E0731 23:37:55.899783     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.938272 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:05 old-k8s-version-130660 kubelet[742]: E0731 23:38:05.912798     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0731 23:40:31.939974 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:10 old-k8s-version-130660 kubelet[742]: E0731 23:38:10.899721     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.940195 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:18 old-k8s-version-130660 kubelet[742]: E0731 23:38:18.901024     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.940543 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:21 old-k8s-version-130660 kubelet[742]: E0731 23:38:21.899662     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.940734 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:33 old-k8s-version-130660 kubelet[742]: E0731 23:38:33.900179     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.941390 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:36 old-k8s-version-130660 kubelet[742]: E0731 23:38:36.433618     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.941754 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:42 old-k8s-version-130660 kubelet[742]: E0731 23:38:42.939018     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.941951 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:46 old-k8s-version-130660 kubelet[742]: E0731 23:38:46.900616     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.942315 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:54 old-k8s-version-130660 kubelet[742]: E0731 23:38:54.899746     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.942519 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:01 old-k8s-version-130660 kubelet[742]: E0731 23:39:01.900337     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.942862 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:08 old-k8s-version-130660 kubelet[742]: E0731 23:39:08.899966     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.943053 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:14 old-k8s-version-130660 kubelet[742]: E0731 23:39:14.900213     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.943394 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:20 old-k8s-version-130660 kubelet[742]: E0731 23:39:20.899847     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.943585 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:25 old-k8s-version-130660 kubelet[742]: E0731 23:39:25.900329     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.943933 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:32 old-k8s-version-130660 kubelet[742]: E0731 23:39:32.900100     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.944131 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:36 old-k8s-version-130660 kubelet[742]: E0731 23:39:36.900554     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.944482 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:46 old-k8s-version-130660 kubelet[742]: E0731 23:39:46.899659     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.944678 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:47 old-k8s-version-130660 kubelet[742]: E0731 23:39:47.900629     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.945022 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:57 old-k8s-version-130660 kubelet[742]: E0731 23:39:57.899835     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.945479 1829252 logs.go:138] Found kubelet problem: Jul 31 23:40:02 old-k8s-version-130660 kubelet[742]: E0731 23:40:02.900337     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.945825 1829252 logs.go:138] Found kubelet problem: Jul 31 23:40:11 old-k8s-version-130660 kubelet[742]: E0731 23:40:11.899717     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.946044 1829252 logs.go:138] Found kubelet problem: Jul 31 23:40:16 old-k8s-version-130660 kubelet[742]: E0731 23:40:16.900104     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.946393 1829252 logs.go:138] Found kubelet problem: Jul 31 23:40:24 old-k8s-version-130660 kubelet[742]: E0731 23:40:24.899764     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.946587 1829252 logs.go:138] Found kubelet problem: Jul 31 23:40:27 old-k8s-version-130660 kubelet[742]: E0731 23:40:27.900206     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
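Each "Found kubelet problem" warning above comes from logs.go scanning the journal output it just fetched for error-level kubelet entries; here that surfaces the two recurring failures in this run (metrics-server ErrImagePull/ImagePullBackOff against the test's fake.domain registry, and the dashboard-metrics-scraper CrashLoopBackOff). A sketch of reproducing the raw input by hand — the grep pattern for klog error lines is an assumption, minikube's own matcher lives in logs.go:

    # Fetch the same 400 journal lines, keeping only klog E-level entries.
    sudo journalctl -u kubelet -n 400 | grep -E ' E[0-9]{4} '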
	I0731 23:40:31.946597 1829252 logs.go:123] Gathering logs for kube-scheduler [9ea334663167822d57c49e09d75be27e43c4d14410ab00696d0bd381e74ba548] ...
	I0731 23:40:31.946613 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea334663167822d57c49e09d75be27e43c4d14410ab00696d0bd381e74ba548"
	I0731 23:40:31.994035 1829252 logs.go:123] Gathering logs for describe nodes ...
	I0731 23:40:31.994065 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 23:40:32.158567 1829252 logs.go:123] Gathering logs for kube-apiserver [46018fe62e056d85a9d92909a3ffe934843c413af90487065b86f79764b6e416] ...
	I0731 23:40:32.158602 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46018fe62e056d85a9d92909a3ffe934843c413af90487065b86f79764b6e416"
	I0731 23:40:32.227818 1829252 logs.go:123] Gathering logs for coredns [de1b5cb3d187661a9da8df949ef36da261421dac028ae8e070be86f4226a470b] ...
	I0731 23:40:32.227860 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de1b5cb3d187661a9da8df949ef36da261421dac028ae8e070be86f4226a470b"
	I0731 23:40:32.273447 1829252 logs.go:123] Gathering logs for kube-controller-manager [6d2854e476f196556888fd39fa1c3e23a43223b6bf7632e42e8a2d3e07316f00] ...
	I0731 23:40:32.273478 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d2854e476f196556888fd39fa1c3e23a43223b6bf7632e42e8a2d3e07316f00"
	I0731 23:40:32.344093 1829252 logs.go:123] Gathering logs for kubernetes-dashboard [d6f8ffde1c928670ca20febc0c4a113d8ad3e22267d8fd745d5c54e8cc854a27] ...
	I0731 23:40:32.344134 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6f8ffde1c928670ca20febc0c4a113d8ad3e22267d8fd745d5c54e8cc854a27"
	I0731 23:40:32.385695 1829252 logs.go:123] Gathering logs for storage-provisioner [e7eb6229b113884a77ab02b300c2b453be8b1723c83dc8f8862391ed9ef419e1] ...
	I0731 23:40:32.385724 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7eb6229b113884a77ab02b300c2b453be8b1723c83dc8f8862391ed9ef419e1"
	I0731 23:40:32.435975 1829252 logs.go:123] Gathering logs for CRI-O ...
	I0731 23:40:32.436001 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 23:40:32.522296 1829252 out.go:304] Setting ErrFile to fd 2...
	I0731 23:40:32.522372 1829252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 23:40:32.522460 1829252 out.go:239] X Problems detected in kubelet:
	W0731 23:40:32.522500 1829252 out.go:239]   Jul 31 23:40:02 old-k8s-version-130660 kubelet[742]: E0731 23:40:02.900337     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:32.522532 1829252 out.go:239]   Jul 31 23:40:11 old-k8s-version-130660 kubelet[742]: E0731 23:40:11.899717     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:32.522583 1829252 out.go:239]   Jul 31 23:40:16 old-k8s-version-130660 kubelet[742]: E0731 23:40:16.900104     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:32.522613 1829252 out.go:239]   Jul 31 23:40:24 old-k8s-version-130660 kubelet[742]: E0731 23:40:24.899764     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:32.522668 1829252 out.go:239]   Jul 31 23:40:27 old-k8s-version-130660 kubelet[742]: E0731 23:40:27.900206     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0731 23:40:32.522702 1829252 out.go:304] Setting ErrFile to fd 2...
	I0731 23:40:32.522720 1829252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:40:42.523167 1829252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
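With the log dump finished, the wait-for-apiserver loop polls for the process itself: pgrep -f matches against the full command line, -x requires the pattern to match that command line exactly (hence the .* anchors), and -n keeps only the newest match. Quoted here to keep the shell from globbing, the probe run above is:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'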
	I0731 23:40:42.534963 1829252 api_server.go:72] duration metric: took 5m51.519646281s to wait for apiserver process to appear ...
	I0731 23:40:42.534988 1829252 api_server.go:88] waiting for apiserver healthz status ...
	I0731 23:40:42.535022 1829252 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 23:40:42.535080 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 23:40:42.573475 1829252 cri.go:89] found id: "46018fe62e056d85a9d92909a3ffe934843c413af90487065b86f79764b6e416"
	I0731 23:40:42.573498 1829252 cri.go:89] found id: ""
	I0731 23:40:42.573507 1829252 logs.go:276] 1 containers: [46018fe62e056d85a9d92909a3ffe934843c413af90487065b86f79764b6e416]
	I0731 23:40:42.573565 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:42.577141 1829252 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 23:40:42.577216 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 23:40:42.618294 1829252 cri.go:89] found id: "6854efb05017bc711513167086bed436131771d8c2f42fe3910912ac80224c6c"
	I0731 23:40:42.618317 1829252 cri.go:89] found id: ""
	I0731 23:40:42.618325 1829252 logs.go:276] 1 containers: [6854efb05017bc711513167086bed436131771d8c2f42fe3910912ac80224c6c]
	I0731 23:40:42.618380 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:42.622074 1829252 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 23:40:42.622146 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 23:40:42.661456 1829252 cri.go:89] found id: "de1b5cb3d187661a9da8df949ef36da261421dac028ae8e070be86f4226a470b"
	I0731 23:40:42.661478 1829252 cri.go:89] found id: ""
	I0731 23:40:42.661486 1829252 logs.go:276] 1 containers: [de1b5cb3d187661a9da8df949ef36da261421dac028ae8e070be86f4226a470b]
	I0731 23:40:42.661547 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:42.665157 1829252 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 23:40:42.665221 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 23:40:42.701128 1829252 cri.go:89] found id: "9ea334663167822d57c49e09d75be27e43c4d14410ab00696d0bd381e74ba548"
	I0731 23:40:42.701152 1829252 cri.go:89] found id: ""
	I0731 23:40:42.701160 1829252 logs.go:276] 1 containers: [9ea334663167822d57c49e09d75be27e43c4d14410ab00696d0bd381e74ba548]
	I0731 23:40:42.701216 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:42.704806 1829252 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 23:40:42.704875 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 23:40:42.741987 1829252 cri.go:89] found id: "c21ba7634bd1a874c9ebc36d102421cb8c623f6f4fc40ef723fae526a1693796"
	I0731 23:40:42.742008 1829252 cri.go:89] found id: ""
	I0731 23:40:42.742017 1829252 logs.go:276] 1 containers: [c21ba7634bd1a874c9ebc36d102421cb8c623f6f4fc40ef723fae526a1693796]
	I0731 23:40:42.742095 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:42.745798 1829252 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 23:40:42.745867 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 23:40:42.789352 1829252 cri.go:89] found id: "6d2854e476f196556888fd39fa1c3e23a43223b6bf7632e42e8a2d3e07316f00"
	I0731 23:40:42.789377 1829252 cri.go:89] found id: ""
	I0731 23:40:42.789384 1829252 logs.go:276] 1 containers: [6d2854e476f196556888fd39fa1c3e23a43223b6bf7632e42e8a2d3e07316f00]
	I0731 23:40:42.789443 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:42.793051 1829252 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 23:40:42.793147 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 23:40:42.830015 1829252 cri.go:89] found id: "bab863fc639de5e0d2e1165a89d0b3b22043e0ffdb80ad3296bca7a7262985f7"
	I0731 23:40:42.830034 1829252 cri.go:89] found id: ""
	I0731 23:40:42.830042 1829252 logs.go:276] 1 containers: [bab863fc639de5e0d2e1165a89d0b3b22043e0ffdb80ad3296bca7a7262985f7]
	I0731 23:40:42.830096 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:42.833643 1829252 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 23:40:42.833716 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 23:40:42.873447 1829252 cri.go:89] found id: "af4a5d8f7ff273f31d82cb997af6a85d6437344b03032e0188e4e18ed9144b92"
	I0731 23:40:42.873469 1829252 cri.go:89] found id: "e7eb6229b113884a77ab02b300c2b453be8b1723c83dc8f8862391ed9ef419e1"
	I0731 23:40:42.873474 1829252 cri.go:89] found id: ""
	I0731 23:40:42.873481 1829252 logs.go:276] 2 containers: [af4a5d8f7ff273f31d82cb997af6a85d6437344b03032e0188e4e18ed9144b92 e7eb6229b113884a77ab02b300c2b453be8b1723c83dc8f8862391ed9ef419e1]
	I0731 23:40:42.873535 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:42.877038 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:42.880423 1829252 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 23:40:42.880533 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 23:40:42.919417 1829252 cri.go:89] found id: "d6f8ffde1c928670ca20febc0c4a113d8ad3e22267d8fd745d5c54e8cc854a27"
	I0731 23:40:42.919441 1829252 cri.go:89] found id: ""
	I0731 23:40:42.919448 1829252 logs.go:276] 1 containers: [d6f8ffde1c928670ca20febc0c4a113d8ad3e22267d8fd745d5c54e8cc854a27]
	I0731 23:40:42.919510 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:42.923555 1829252 logs.go:123] Gathering logs for dmesg ...
	I0731 23:40:42.923587 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 23:40:42.942311 1829252 logs.go:123] Gathering logs for kube-controller-manager [6d2854e476f196556888fd39fa1c3e23a43223b6bf7632e42e8a2d3e07316f00] ...
	I0731 23:40:42.942342 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d2854e476f196556888fd39fa1c3e23a43223b6bf7632e42e8a2d3e07316f00"
	I0731 23:40:43.024646 1829252 logs.go:123] Gathering logs for kindnet [bab863fc639de5e0d2e1165a89d0b3b22043e0ffdb80ad3296bca7a7262985f7] ...
	I0731 23:40:43.024684 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bab863fc639de5e0d2e1165a89d0b3b22043e0ffdb80ad3296bca7a7262985f7"
	I0731 23:40:43.086681 1829252 logs.go:123] Gathering logs for CRI-O ...
	I0731 23:40:43.086711 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 23:40:43.174577 1829252 logs.go:123] Gathering logs for container status ...
	I0731 23:40:43.174613 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 23:40:43.216374 1829252 logs.go:123] Gathering logs for kube-apiserver [46018fe62e056d85a9d92909a3ffe934843c413af90487065b86f79764b6e416] ...
	I0731 23:40:43.216401 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46018fe62e056d85a9d92909a3ffe934843c413af90487065b86f79764b6e416"
	I0731 23:40:43.291210 1829252 logs.go:123] Gathering logs for storage-provisioner [af4a5d8f7ff273f31d82cb997af6a85d6437344b03032e0188e4e18ed9144b92] ...
	I0731 23:40:43.291247 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af4a5d8f7ff273f31d82cb997af6a85d6437344b03032e0188e4e18ed9144b92"
	I0731 23:40:43.333618 1829252 logs.go:123] Gathering logs for storage-provisioner [e7eb6229b113884a77ab02b300c2b453be8b1723c83dc8f8862391ed9ef419e1] ...
	I0731 23:40:43.333647 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7eb6229b113884a77ab02b300c2b453be8b1723c83dc8f8862391ed9ef419e1"
	I0731 23:40:43.373657 1829252 logs.go:123] Gathering logs for kubernetes-dashboard [d6f8ffde1c928670ca20febc0c4a113d8ad3e22267d8fd745d5c54e8cc854a27] ...
	I0731 23:40:43.373686 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6f8ffde1c928670ca20febc0c4a113d8ad3e22267d8fd745d5c54e8cc854a27"
	I0731 23:40:43.418607 1829252 logs.go:123] Gathering logs for etcd [6854efb05017bc711513167086bed436131771d8c2f42fe3910912ac80224c6c] ...
	I0731 23:40:43.418636 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6854efb05017bc711513167086bed436131771d8c2f42fe3910912ac80224c6c"
	I0731 23:40:43.484482 1829252 logs.go:123] Gathering logs for kube-scheduler [9ea334663167822d57c49e09d75be27e43c4d14410ab00696d0bd381e74ba548] ...
	I0731 23:40:43.484516 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea334663167822d57c49e09d75be27e43c4d14410ab00696d0bd381e74ba548"
	I0731 23:40:43.544591 1829252 logs.go:123] Gathering logs for kube-proxy [c21ba7634bd1a874c9ebc36d102421cb8c623f6f4fc40ef723fae526a1693796] ...
	I0731 23:40:43.544623 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c21ba7634bd1a874c9ebc36d102421cb8c623f6f4fc40ef723fae526a1693796"
	I0731 23:40:43.622041 1829252 logs.go:123] Gathering logs for kubelet ...
	I0731 23:40:43.622069 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 23:40:43.694432 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593291     742 reflector.go:138] object-"kube-system"/"kindnet-token-crzsj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-crzsj" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:43.694779 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593424     742 reflector.go:138] object-"kube-system"/"storage-provisioner-token-d22vf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-d22vf" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:43.695038 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593481     742 reflector.go:138] object-"kube-system"/"metrics-server-token-kr52c": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-kr52c" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:43.695680 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593534     742 reflector.go:138] object-"kube-system"/"kube-proxy-token-fzgtf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-fzgtf" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:43.695993 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593592     742 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:43.697348 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593646     742 reflector.go:138] object-"default"/"default-token-4prkn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-4prkn" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:43.697624 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593719     742 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:43.697848 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593768     742 reflector.go:138] object-"kube-system"/"coredns-token-6sgwf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-6sgwf" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:43.710124 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:09 old-k8s-version-130660 kubelet[742]: E0731 23:35:09.731660     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0731 23:40:43.710762 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:09 old-k8s-version-130660 kubelet[742]: E0731 23:35:09.980822     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.714635 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:23 old-k8s-version-130660 kubelet[742]: E0731 23:35:23.912962     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0731 23:40:43.716251 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:37 old-k8s-version-130660 kubelet[742]: E0731 23:35:37.900150     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.716594 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:38 old-k8s-version-130660 kubelet[742]: E0731 23:35:38.139768     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.717077 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:39 old-k8s-version-130660 kubelet[742]: E0731 23:35:39.142583     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.717609 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:42 old-k8s-version-130660 kubelet[742]: E0731 23:35:42.939292     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.719764 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:51 old-k8s-version-130660 kubelet[742]: E0731 23:35:51.915245     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0731 23:40:43.720376 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:55 old-k8s-version-130660 kubelet[742]: E0731 23:35:55.174079     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.720568 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:02 old-k8s-version-130660 kubelet[742]: E0731 23:36:02.900602     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.720909 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:02 old-k8s-version-130660 kubelet[742]: E0731 23:36:02.938961     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.721129 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:14 old-k8s-version-130660 kubelet[742]: E0731 23:36:14.900659     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.721793 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:16 old-k8s-version-130660 kubelet[742]: E0731 23:36:16.237810     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.722238 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:22 old-k8s-version-130660 kubelet[742]: E0731 23:36:22.938992     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.722440 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:25 old-k8s-version-130660 kubelet[742]: E0731 23:36:25.900141     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.722784 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:37 old-k8s-version-130660 kubelet[742]: E0731 23:36:37.900048     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.726823 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:37 old-k8s-version-130660 kubelet[742]: E0731 23:36:37.913581     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0731 23:40:43.727038 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:48 old-k8s-version-130660 kubelet[742]: E0731 23:36:48.900800     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.727383 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:50 old-k8s-version-130660 kubelet[742]: E0731 23:36:50.899714     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.727574 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:02 old-k8s-version-130660 kubelet[742]: E0731 23:37:02.901296     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.728202 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:06 old-k8s-version-130660 kubelet[742]: E0731 23:37:06.310165     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.728542 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:12 old-k8s-version-130660 kubelet[742]: E0731 23:37:12.939044     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.728737 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:14 old-k8s-version-130660 kubelet[742]: E0731 23:37:14.900812     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.729312 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:27 old-k8s-version-130660 kubelet[742]: E0731 23:37:27.899533     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.729512 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:28 old-k8s-version-130660 kubelet[742]: E0731 23:37:28.900944     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.729863 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:41 old-k8s-version-130660 kubelet[742]: E0731 23:37:41.899714     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.730056 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:42 old-k8s-version-130660 kubelet[742]: E0731 23:37:42.900065     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.730313 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:54 old-k8s-version-130660 kubelet[742]: E0731 23:37:54.900104     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.730658 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:55 old-k8s-version-130660 kubelet[742]: E0731 23:37:55.899783     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.732864 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:05 old-k8s-version-130660 kubelet[742]: E0731 23:38:05.912798     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0731 23:40:43.733822 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:10 old-k8s-version-130660 kubelet[742]: E0731 23:38:10.899721     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.734056 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:18 old-k8s-version-130660 kubelet[742]: E0731 23:38:18.901024     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.734397 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:21 old-k8s-version-130660 kubelet[742]: E0731 23:38:21.899662     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.734588 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:33 old-k8s-version-130660 kubelet[742]: E0731 23:38:33.900179     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.735701 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:36 old-k8s-version-130660 kubelet[742]: E0731 23:38:36.433618     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.736069 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:42 old-k8s-version-130660 kubelet[742]: E0731 23:38:42.939018     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.736263 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:46 old-k8s-version-130660 kubelet[742]: E0731 23:38:46.900616     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.736604 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:54 old-k8s-version-130660 kubelet[742]: E0731 23:38:54.899746     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.736845 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:01 old-k8s-version-130660 kubelet[742]: E0731 23:39:01.900337     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.737255 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:08 old-k8s-version-130660 kubelet[742]: E0731 23:39:08.899966     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.737462 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:14 old-k8s-version-130660 kubelet[742]: E0731 23:39:14.900213     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.737829 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:20 old-k8s-version-130660 kubelet[742]: E0731 23:39:20.899847     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.738043 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:25 old-k8s-version-130660 kubelet[742]: E0731 23:39:25.900329     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.738420 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:32 old-k8s-version-130660 kubelet[742]: E0731 23:39:32.900100     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.738641 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:36 old-k8s-version-130660 kubelet[742]: E0731 23:39:36.900554     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.739002 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:46 old-k8s-version-130660 kubelet[742]: E0731 23:39:46.899659     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.739207 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:47 old-k8s-version-130660 kubelet[742]: E0731 23:39:47.900629     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.739595 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:57 old-k8s-version-130660 kubelet[742]: E0731 23:39:57.899835     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.740089 1829252 logs.go:138] Found kubelet problem: Jul 31 23:40:02 old-k8s-version-130660 kubelet[742]: E0731 23:40:02.900337     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.740457 1829252 logs.go:138] Found kubelet problem: Jul 31 23:40:11 old-k8s-version-130660 kubelet[742]: E0731 23:40:11.899717     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.740670 1829252 logs.go:138] Found kubelet problem: Jul 31 23:40:16 old-k8s-version-130660 kubelet[742]: E0731 23:40:16.900104     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.741031 1829252 logs.go:138] Found kubelet problem: Jul 31 23:40:24 old-k8s-version-130660 kubelet[742]: E0731 23:40:24.899764     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.741253 1829252 logs.go:138] Found kubelet problem: Jul 31 23:40:27 old-k8s-version-130660 kubelet[742]: E0731 23:40:27.900206     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.741628 1829252 logs.go:138] Found kubelet problem: Jul 31 23:40:37 old-k8s-version-130660 kubelet[742]: E0731 23:40:37.900394     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.741830 1829252 logs.go:138] Found kubelet problem: Jul 31 23:40:38 old-k8s-version-130660 kubelet[742]: E0731 23:40:38.900915     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0731 23:40:43.741846 1829252 logs.go:123] Gathering logs for describe nodes ...
	I0731 23:40:43.741863 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 23:40:43.924872 1829252 logs.go:123] Gathering logs for coredns [de1b5cb3d187661a9da8df949ef36da261421dac028ae8e070be86f4226a470b] ...
	I0731 23:40:43.924913 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de1b5cb3d187661a9da8df949ef36da261421dac028ae8e070be86f4226a470b"
	I0731 23:40:43.993229 1829252 out.go:304] Setting ErrFile to fd 2...
	I0731 23:40:43.993305 1829252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 23:40:43.993387 1829252 out.go:239] X Problems detected in kubelet:
	W0731 23:40:43.993430 1829252 out.go:239]   Jul 31 23:40:16 old-k8s-version-130660 kubelet[742]: E0731 23:40:16.900104     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.993462 1829252 out.go:239]   Jul 31 23:40:24 old-k8s-version-130660 kubelet[742]: E0731 23:40:24.899764     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.993512 1829252 out.go:239]   Jul 31 23:40:27 old-k8s-version-130660 kubelet[742]: E0731 23:40:27.900206     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.993545 1829252 out.go:239]   Jul 31 23:40:37 old-k8s-version-130660 kubelet[742]: E0731 23:40:37.900394     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.993586 1829252 out.go:239]   Jul 31 23:40:38 old-k8s-version-130660 kubelet[742]: E0731 23:40:38.900915     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0731 23:40:43.993620 1829252 out.go:304] Setting ErrFile to fd 2...
	I0731 23:40:43.993640 1829252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:40:53.995026 1829252 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0731 23:40:54.009309 1829252 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0731 23:40:54.011797 1829252 out.go:177] 
	W0731 23:40:54.013632 1829252 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0731 23:40:54.013674 1829252 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0731 23:40:54.013694 1829252 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0731 23:40:54.013700 1829252 out.go:239] * 
	W0731 23:40:54.014777 1829252 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 23:40:54.018048 1829252 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-130660 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 102
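Note: the recurring metrics-server ErrImagePull/ImagePullBackOff entries in the log above are expected for this test; per the audit table below, metrics-server was deliberately re-registered against the unreachable registry fake.domain (--registries=MetricsServer=fake.domain). The actual failure is K8S_UNHEALTHY_CONTROL_PLANE: the apiserver answered /healthz with 200, but minikube timed out waiting for the control plane to report v1.20.0. A minimal sketch of the remediation the log itself suggests, assuming the same binary and profile (the kvm-* flags from the failing invocation are omitted here since they target VM drivers, not --driver=docker):

	# wipe all profiles and cached state, as suggested in the log
	out/minikube-linux-arm64 delete --all --purge
	# retry the post-stop start that exited with status 102
	out/minikube-linux-arm64 start -p old-k8s-version-130660 --memory=2200 --alsologtostderr --wait=true --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=crio --kubernetes-version=v1.20.0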
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-130660
helpers_test.go:235: (dbg) docker inspect old-k8s-version-130660:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "113aa41627b627283271b1441f6762c7553c9518cd510a6ea5aeb3e393c5884b",
	        "Created": "2024-07-31T23:31:45.08271664Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1829466,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-31T23:34:43.21527362Z",
	            "FinishedAt": "2024-07-31T23:34:41.224241335Z"
	        },
	        "Image": "sha256:2cd84ab2172023a68162f38a55db46353562cea41552fd8e8bdec97b31b2c495",
	        "ResolvConfPath": "/var/lib/docker/containers/113aa41627b627283271b1441f6762c7553c9518cd510a6ea5aeb3e393c5884b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/113aa41627b627283271b1441f6762c7553c9518cd510a6ea5aeb3e393c5884b/hostname",
	        "HostsPath": "/var/lib/docker/containers/113aa41627b627283271b1441f6762c7553c9518cd510a6ea5aeb3e393c5884b/hosts",
	        "LogPath": "/var/lib/docker/containers/113aa41627b627283271b1441f6762c7553c9518cd510a6ea5aeb3e393c5884b/113aa41627b627283271b1441f6762c7553c9518cd510a6ea5aeb3e393c5884b-json.log",
	        "Name": "/old-k8s-version-130660",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-130660:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-130660",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1f19ddf8372dcb83a3355044ece73dd52a1c30025dd4af40f52a993d25961f03-init/diff:/var/lib/docker/overlay2/a3c8edb55465dd5b1044de542fb24c31e00154ba5ba4e9841112d37a01d06a98/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f19ddf8372dcb83a3355044ece73dd52a1c30025dd4af40f52a993d25961f03/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f19ddf8372dcb83a3355044ece73dd52a1c30025dd4af40f52a993d25961f03/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f19ddf8372dcb83a3355044ece73dd52a1c30025dd4af40f52a993d25961f03/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-130660",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-130660/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-130660",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-130660",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-130660",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2e670b72156ddd331573c487209fc7eaa98204c9ca1bcb6728c714f9a1c0ceea",
	            "SandboxKey": "/var/run/docker/netns/2e670b72156d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34971"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34972"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34975"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34973"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34974"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-130660": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ad2d90793e0afb82e8d6c67a260b43aa4c527904541179831a171f65e50d2bea",
	                    "EndpointID": "930abddff8199ae77386e9f06da4e1453e9588a4fb29ba7be6e0a896b3959db0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-130660",
	                        "113aa41627b6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
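The inspect output above confirms the container is running with the apiserver published on a host port. For targeted checks against a profile container like this one, docker inspect accepts Go templates (the same idiom the harness uses for minikube status --format); a sketch, with the container and network names taken from the inspect output above:

	# container state
	docker inspect -f '{{.State.Status}}' old-k8s-version-130660
	# IP on the profile network (the key contains dashes, hence index)
	docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-130660").IPAddress}}' old-k8s-version-130660
	# host port mapped to the apiserver (8443/tcp), per the Ports section
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-130660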
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-130660 -n old-k8s-version-130660
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-130660 logs -n 25
E0731 23:40:55.295443 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/flannel-570273/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-130660 logs -n 25: (1.458857025s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-570273 sudo cat                              | bridge-570273          | jenkins | v1.33.1 | 31 Jul 24 23:32 UTC | 31 Jul 24 23:32 UTC |
	|         | /lib/systemd/system/containerd.service                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-570273 sudo cat                              | bridge-570273          | jenkins | v1.33.1 | 31 Jul 24 23:32 UTC | 31 Jul 24 23:32 UTC |
	|         | /etc/containerd/config.toml                            |                        |         |         |                     |                     |
	| ssh     | -p bridge-570273 sudo                                  | bridge-570273          | jenkins | v1.33.1 | 31 Jul 24 23:32 UTC | 31 Jul 24 23:32 UTC |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-570273 sudo                                  | bridge-570273          | jenkins | v1.33.1 | 31 Jul 24 23:32 UTC | 31 Jul 24 23:32 UTC |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p bridge-570273 sudo                                  | bridge-570273          | jenkins | v1.33.1 | 31 Jul 24 23:32 UTC | 31 Jul 24 23:32 UTC |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-570273 sudo find                             | bridge-570273          | jenkins | v1.33.1 | 31 Jul 24 23:32 UTC | 31 Jul 24 23:32 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p bridge-570273 sudo crio                             | bridge-570273          | jenkins | v1.33.1 | 31 Jul 24 23:32 UTC | 31 Jul 24 23:32 UTC |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p bridge-570273                                       | bridge-570273          | jenkins | v1.33.1 | 31 Jul 24 23:32 UTC | 31 Jul 24 23:32 UTC |
	| start   | -p no-preload-637585 --memory=2200                     | no-preload-637585      | jenkins | v1.33.1 | 31 Jul 24 23:32 UTC | 31 Jul 24 23:33 UTC |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --preload=false --driver=docker                        |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-637585             | no-preload-637585      | jenkins | v1.33.1 | 31 Jul 24 23:34 UTC | 31 Jul 24 23:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-637585                                   | no-preload-637585      | jenkins | v1.33.1 | 31 Jul 24 23:34 UTC | 31 Jul 24 23:34 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-637585                  | no-preload-637585      | jenkins | v1.33.1 | 31 Jul 24 23:34 UTC | 31 Jul 24 23:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-637585 --memory=2200                     | no-preload-637585      | jenkins | v1.33.1 | 31 Jul 24 23:34 UTC | 31 Jul 24 23:39 UTC |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --preload=false --driver=docker                        |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-130660        | old-k8s-version-130660 | jenkins | v1.33.1 | 31 Jul 24 23:34 UTC | 31 Jul 24 23:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-130660                              | old-k8s-version-130660 | jenkins | v1.33.1 | 31 Jul 24 23:34 UTC | 31 Jul 24 23:34 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-130660             | old-k8s-version-130660 | jenkins | v1.33.1 | 31 Jul 24 23:34 UTC | 31 Jul 24 23:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-130660                              | old-k8s-version-130660 | jenkins | v1.33.1 | 31 Jul 24 23:34 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --container-runtime=crio                               |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                        |         |         |                     |                     |
	| image   | no-preload-637585 image list                           | no-preload-637585      | jenkins | v1.33.1 | 31 Jul 24 23:39 UTC | 31 Jul 24 23:39 UTC |
	|         | --format=json                                          |                        |         |         |                     |                     |
	| pause   | -p no-preload-637585                                   | no-preload-637585      | jenkins | v1.33.1 | 31 Jul 24 23:39 UTC | 31 Jul 24 23:39 UTC |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| unpause | -p no-preload-637585                                   | no-preload-637585      | jenkins | v1.33.1 | 31 Jul 24 23:39 UTC | 31 Jul 24 23:39 UTC |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p no-preload-637585                                   | no-preload-637585      | jenkins | v1.33.1 | 31 Jul 24 23:39 UTC | 31 Jul 24 23:39 UTC |
	| delete  | -p no-preload-637585                                   | no-preload-637585      | jenkins | v1.33.1 | 31 Jul 24 23:39 UTC | 31 Jul 24 23:39 UTC |
	| start   | -p embed-certs-442076                                  | embed-certs-442076     | jenkins | v1.33.1 | 31 Jul 24 23:39 UTC | 31 Jul 24 23:40 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-442076            | embed-certs-442076     | jenkins | v1.33.1 | 31 Jul 24 23:40 UTC | 31 Jul 24 23:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-442076                                  | embed-certs-442076     | jenkins | v1.33.1 | 31 Jul 24 23:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 23:39:33
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 23:39:33.490791 1834111 out.go:291] Setting OutFile to fd 1 ...
	I0731 23:39:33.490966 1834111 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:39:33.490975 1834111 out.go:304] Setting ErrFile to fd 2...
	I0731 23:39:33.490982 1834111 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:39:33.491207 1834111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1579223/.minikube/bin
	I0731 23:39:33.491591 1834111 out.go:298] Setting JSON to false
	I0731 23:39:33.492611 1834111 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":26512,"bootTime":1722442662,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0731 23:39:33.492679 1834111 start.go:139] virtualization:  
	I0731 23:39:33.495829 1834111 out.go:177] * [embed-certs-442076] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0731 23:39:33.498529 1834111 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 23:39:33.499508 1834111 notify.go:220] Checking for updates...
	I0731 23:39:33.502491 1834111 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 23:39:33.504502 1834111 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1579223/kubeconfig
	I0731 23:39:33.506721 1834111 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1579223/.minikube
	I0731 23:39:33.508677 1834111 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 23:39:33.510568 1834111 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 23:39:33.512834 1834111 config.go:182] Loaded profile config "old-k8s-version-130660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 23:39:33.512932 1834111 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 23:39:33.535358 1834111 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0731 23:39:33.535480 1834111 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 23:39:33.612215 1834111 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-31 23:39:33.601736505 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0731 23:39:33.613344 1834111 docker.go:307] overlay module found
	I0731 23:39:33.615366 1834111 out.go:177] * Using the docker driver based on user configuration
	I0731 23:39:33.617330 1834111 start.go:297] selected driver: docker
	I0731 23:39:33.617347 1834111 start.go:901] validating driver "docker" against <nil>
	I0731 23:39:33.617362 1834111 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 23:39:33.618012 1834111 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 23:39:33.682348 1834111 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-31 23:39:33.6727252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0731 23:39:33.682533 1834111 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 23:39:33.682782 1834111 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 23:39:33.685166 1834111 out.go:177] * Using Docker driver with root privileges
	I0731 23:39:33.687414 1834111 cni.go:84] Creating CNI manager for ""
	I0731 23:39:33.687439 1834111 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 23:39:33.687451 1834111 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 23:39:33.687551 1834111 start.go:340] cluster config:
	{Name:embed-certs-442076 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-442076 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:39:33.689851 1834111 out.go:177] * Starting "embed-certs-442076" primary control-plane node in "embed-certs-442076" cluster
	I0731 23:39:33.692164 1834111 cache.go:121] Beginning downloading kic base image for docker with crio
	I0731 23:39:33.694389 1834111 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0731 23:39:33.696643 1834111 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 23:39:33.696698 1834111 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1579223/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	I0731 23:39:33.696710 1834111 cache.go:56] Caching tarball of preloaded images
	I0731 23:39:33.696768 1834111 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0731 23:39:33.696793 1834111 preload.go:172] Found /home/jenkins/minikube-integration/19360-1579223/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0731 23:39:33.696803 1834111 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 23:39:33.696908 1834111 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/config.json ...
	I0731 23:39:33.696925 1834111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/config.json: {Name:mk05c497e83cce4dbc0b049befb1cc6b6da976a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0731 23:39:33.716938 1834111 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0731 23:39:33.716964 1834111 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0731 23:39:33.717060 1834111 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0731 23:39:33.717082 1834111 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0731 23:39:33.717087 1834111 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0731 23:39:33.717128 1834111 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0731 23:39:33.717135 1834111 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0731 23:39:33.846435 1834111 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0731 23:39:33.846471 1834111 cache.go:194] Successfully downloaded all kic artifacts
	I0731 23:39:33.846513 1834111 start.go:360] acquireMachinesLock for embed-certs-442076: {Name:mk55f3a1ee9136bd0ac98e642686547e3861f047 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 23:39:33.847239 1834111 start.go:364] duration metric: took 699.575µs to acquireMachinesLock for "embed-certs-442076"
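
For context on the acquireMachinesLock step above: the lock options show a 500ms retry delay and a 10-minute timeout. Below is a minimal Go sketch of that acquire-with-retry pattern, assuming a simple exclusive lock file; the function name and path are hypothetical, and minikube's real implementation in lock.go may differ.

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquireLock polls for an exclusive lock file, retrying every delay
	// until timeout, mirroring the Delay:500ms Timeout:10m0s options above.
	func acquireLock(path string, delay, timeout time.Duration) (func() error, error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() error { return os.Remove(path) }, nil // release func
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s", path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquireLock("/tmp/demo-machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			panic(err)
		}
		defer release()
		fmt.Println("lock held")
	}
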
	I0731 23:39:33.847280 1834111 start.go:93] Provisioning new machine with config: &{Name:embed-certs-442076 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-442076 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 23:39:33.847370 1834111 start.go:125] createHost starting for "" (driver="docker")
	I0731 23:39:34.669368 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:37.168209 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:33.851251 1834111 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0731 23:39:33.851495 1834111 start.go:159] libmachine.API.Create for "embed-certs-442076" (driver="docker")
	I0731 23:39:33.851538 1834111 client.go:168] LocalClient.Create starting
	I0731 23:39:33.851623 1834111 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca.pem
	I0731 23:39:33.851663 1834111 main.go:141] libmachine: Decoding PEM data...
	I0731 23:39:33.851683 1834111 main.go:141] libmachine: Parsing certificate...
	I0731 23:39:33.851745 1834111 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/cert.pem
	I0731 23:39:33.851768 1834111 main.go:141] libmachine: Decoding PEM data...
	I0731 23:39:33.851793 1834111 main.go:141] libmachine: Parsing certificate...
	I0731 23:39:33.852177 1834111 cli_runner.go:164] Run: docker network inspect embed-certs-442076 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 23:39:33.867053 1834111 cli_runner.go:211] docker network inspect embed-certs-442076 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 23:39:33.867155 1834111 network_create.go:284] running [docker network inspect embed-certs-442076] to gather additional debugging logs...
	I0731 23:39:33.867176 1834111 cli_runner.go:164] Run: docker network inspect embed-certs-442076
	W0731 23:39:33.882442 1834111 cli_runner.go:211] docker network inspect embed-certs-442076 returned with exit code 1
	I0731 23:39:33.882482 1834111 network_create.go:287] error running [docker network inspect embed-certs-442076]: docker network inspect embed-certs-442076: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-442076 not found
	I0731 23:39:33.882495 1834111 network_create.go:289] output of [docker network inspect embed-certs-442076]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-442076 not found
	
	** /stderr **
	I0731 23:39:33.882600 1834111 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 23:39:33.899486 1834111 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6f1e4293cdc4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:71:39:fb:ee} reservation:<nil>}
	I0731 23:39:33.899867 1834111 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-81a9ef350509 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:a6:64:66:d4} reservation:<nil>}
	I0731 23:39:33.900290 1834111 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9077c8841468 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:5f:f6:fa:5c} reservation:<nil>}
	I0731 23:39:33.900706 1834111 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ad2d90793e0a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:04:19:5e:26} reservation:<nil>}
	I0731 23:39:33.901265 1834111 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001843f10}
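
The four "skipping subnet" lines show the free-subnet scan: candidate 192.168.x.0/24 networks are tried in ascending order and the first one without a matching host bridge wins. Below is a minimal Go sketch of that selection logic, assuming the fixed step of 9 implied by the candidates tried above (49, 58, 67, 76, then 85); the helper name is hypothetical.

	package main

	import "fmt"

	// freeSubnet walks candidate private /24s starting at 192.168.49.0/24,
	// stepping the third octet by 9 (the step implied by the log above),
	// and returns the first CIDR not already taken.
	func freeSubnet(taken map[string]bool) string {
		for octet := 49; octet <= 247; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[cidr] {
				return cidr
			}
		}
		return ""
	}

	func main() {
		// The four bridges the scan skipped above.
		taken := map[string]bool{
			"192.168.49.0/24": true, // br-6f1e4293cdc4
			"192.168.58.0/24": true, // br-81a9ef350509
			"192.168.67.0/24": true, // br-9077c8841468
			"192.168.76.0/24": true, // br-ad2d90793e0a
		}
		fmt.Println(freeSubnet(taken)) // prints 192.168.85.0/24, as chosen above
	}
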
	I0731 23:39:33.901306 1834111 network_create.go:124] attempt to create docker network embed-certs-442076 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0731 23:39:33.901402 1834111 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-442076 embed-certs-442076
	I0731 23:39:33.970523 1834111 network_create.go:108] docker network embed-certs-442076 192.168.85.0/24 created
	I0731 23:39:33.970564 1834111 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-442076" container
	I0731 23:39:33.970655 1834111 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 23:39:33.991583 1834111 cli_runner.go:164] Run: docker volume create embed-certs-442076 --label name.minikube.sigs.k8s.io=embed-certs-442076 --label created_by.minikube.sigs.k8s.io=true
	I0731 23:39:34.021559 1834111 oci.go:103] Successfully created a docker volume embed-certs-442076
	I0731 23:39:34.021671 1834111 cli_runner.go:164] Run: docker run --rm --name embed-certs-442076-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-442076 --entrypoint /usr/bin/test -v embed-certs-442076:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0731 23:39:34.722484 1834111 oci.go:107] Successfully prepared a docker volume embed-certs-442076
	I0731 23:39:34.722535 1834111 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 23:39:34.722558 1834111 kic.go:194] Starting extracting preloaded images to volume ...
	I0731 23:39:34.722648 1834111 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19360-1579223/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-442076:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0731 23:39:39.730387 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:42.169022 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:40.561849 1834111 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19360-1579223/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-442076:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir: (5.839160829s)
	I0731 23:39:40.561885 1834111 kic.go:203] duration metric: took 5.839323192s to extract preloaded images to volume ...
	W0731 23:39:40.562028 1834111 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0731 23:39:40.562145 1834111 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0731 23:39:40.615978 1834111 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-442076 --name embed-certs-442076 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-442076 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-442076 --network embed-certs-442076 --ip 192.168.85.2 --volume embed-certs-442076:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7
	I0731 23:39:40.956487 1834111 cli_runner.go:164] Run: docker container inspect embed-certs-442076 --format={{.State.Running}}
	I0731 23:39:40.979148 1834111 cli_runner.go:164] Run: docker container inspect embed-certs-442076 --format={{.State.Status}}
	I0731 23:39:41.017834 1834111 cli_runner.go:164] Run: docker exec embed-certs-442076 stat /var/lib/dpkg/alternatives/iptables
	I0731 23:39:41.081998 1834111 oci.go:144] the created container "embed-certs-442076" has a running status.
	I0731 23:39:41.082028 1834111 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19360-1579223/.minikube/machines/embed-certs-442076/id_rsa...
	I0731 23:39:41.329175 1834111 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19360-1579223/.minikube/machines/embed-certs-442076/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0731 23:39:41.349474 1834111 cli_runner.go:164] Run: docker container inspect embed-certs-442076 --format={{.State.Status}}
	I0731 23:39:41.384125 1834111 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0731 23:39:41.384145 1834111 kic_runner.go:114] Args: [docker exec --privileged embed-certs-442076 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0731 23:39:41.454898 1834111 cli_runner.go:164] Run: docker container inspect embed-certs-442076 --format={{.State.Status}}
	I0731 23:39:41.475627 1834111 machine.go:94] provisionDockerMachine start ...
	I0731 23:39:41.475712 1834111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-442076
	I0731 23:39:41.496548 1834111 main.go:141] libmachine: Using SSH client type: native
	I0731 23:39:41.496819 1834111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34976 <nil> <nil>}
	I0731 23:39:41.496828 1834111 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 23:39:41.497554 1834111 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51278->127.0.0.1:34976: read: connection reset by peer
	I0731 23:39:44.668803 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:47.168065 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:44.632485 1834111 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-442076
	
	I0731 23:39:44.632512 1834111 ubuntu.go:169] provisioning hostname "embed-certs-442076"
	I0731 23:39:44.632578 1834111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-442076
	I0731 23:39:44.649708 1834111 main.go:141] libmachine: Using SSH client type: native
	I0731 23:39:44.649961 1834111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34976 <nil> <nil>}
	I0731 23:39:44.649979 1834111 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-442076 && echo "embed-certs-442076" | sudo tee /etc/hostname
	I0731 23:39:44.801489 1834111 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-442076
	
	I0731 23:39:44.801566 1834111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-442076
	I0731 23:39:44.819758 1834111 main.go:141] libmachine: Using SSH client type: native
	I0731 23:39:44.820012 1834111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34976 <nil> <nil>}
	I0731 23:39:44.820028 1834111 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-442076' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-442076/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-442076' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 23:39:44.961859 1834111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 23:39:44.961887 1834111 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1579223/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1579223/.minikube}
	I0731 23:39:44.961915 1834111 ubuntu.go:177] setting up certificates
	I0731 23:39:44.961924 1834111 provision.go:84] configureAuth start
	I0731 23:39:44.961987 1834111 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-442076
	I0731 23:39:44.979355 1834111 provision.go:143] copyHostCerts
	I0731 23:39:44.979430 1834111 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1579223/.minikube/cert.pem, removing ...
	I0731 23:39:44.979444 1834111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1579223/.minikube/cert.pem
	I0731 23:39:44.979527 1834111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1579223/.minikube/cert.pem (1123 bytes)
	I0731 23:39:44.979665 1834111 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1579223/.minikube/key.pem, removing ...
	I0731 23:39:44.979675 1834111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1579223/.minikube/key.pem
	I0731 23:39:44.979703 1834111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1579223/.minikube/key.pem (1679 bytes)
	I0731 23:39:44.979777 1834111 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.pem, removing ...
	I0731 23:39:44.979787 1834111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.pem
	I0731 23:39:44.979814 1834111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.pem (1082 bytes)
	I0731 23:39:44.979877 1834111 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca-key.pem org=jenkins.embed-certs-442076 san=[127.0.0.1 192.168.85.2 embed-certs-442076 localhost minikube]
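
The "generating server cert" line lists the SANs baked into server.pem: the IPs 127.0.0.1 and 192.168.85.2 plus the names embed-certs-442076, localhost and minikube, signed by the minikubeCA key pair. Below is a self-contained Go sketch of issuing such a SAN'd server certificate; the throwaway CA, RSA key size and validity period are illustrative assumptions, since minikube reuses its stored ca.pem/ca-key.pem.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA for the sketch; minikube signs with its stored ca-key.pem instead.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s above
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			panic(err)
		}

		// Server certificate carrying the SANs from the provision log above.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-442076"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"embed-certs-442076", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("issued server cert: %d DER bytes\n", len(srvDER))
	}
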
	I0731 23:39:45.275615 1834111 provision.go:177] copyRemoteCerts
	I0731 23:39:45.275702 1834111 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 23:39:45.275782 1834111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-442076
	I0731 23:39:45.296278 1834111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34976 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/embed-certs-442076/id_rsa Username:docker}
	I0731 23:39:45.402917 1834111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 23:39:45.431419 1834111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 23:39:45.460670 1834111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 23:39:45.488937 1834111 provision.go:87] duration metric: took 526.998981ms to configureAuth
	I0731 23:39:45.488964 1834111 ubuntu.go:193] setting minikube options for container-runtime
	I0731 23:39:45.489237 1834111 config.go:182] Loaded profile config "embed-certs-442076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 23:39:45.489351 1834111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-442076
	I0731 23:39:45.505573 1834111 main.go:141] libmachine: Using SSH client type: native
	I0731 23:39:45.505837 1834111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34976 <nil> <nil>}
	I0731 23:39:45.505859 1834111 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 23:39:45.852654 1834111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 23:39:45.852726 1834111 machine.go:97] duration metric: took 4.377080669s to provisionDockerMachine
	I0731 23:39:45.852751 1834111 client.go:171] duration metric: took 12.001198816s to LocalClient.Create
	I0731 23:39:45.852784 1834111 start.go:167] duration metric: took 12.001289515s to libmachine.API.Create "embed-certs-442076"
	I0731 23:39:45.852827 1834111 start.go:293] postStartSetup for "embed-certs-442076" (driver="docker")
	I0731 23:39:45.852854 1834111 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 23:39:45.852966 1834111 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 23:39:45.853041 1834111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-442076
	I0731 23:39:45.869715 1834111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34976 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/embed-certs-442076/id_rsa Username:docker}
	I0731 23:39:45.967760 1834111 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 23:39:45.971677 1834111 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 23:39:45.971711 1834111 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 23:39:45.971721 1834111 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 23:39:45.971728 1834111 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0731 23:39:45.971748 1834111 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1579223/.minikube/addons for local assets ...
	I0731 23:39:45.971817 1834111 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1579223/.minikube/files for local assets ...
	I0731 23:39:45.971924 1834111 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1579223/.minikube/files/etc/ssl/certs/15846152.pem -> 15846152.pem in /etc/ssl/certs
	I0731 23:39:45.972036 1834111 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 23:39:45.984638 1834111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/files/etc/ssl/certs/15846152.pem --> /etc/ssl/certs/15846152.pem (1708 bytes)
	I0731 23:39:46.013796 1834111 start.go:296] duration metric: took 160.938288ms for postStartSetup
	I0731 23:39:46.014230 1834111 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-442076
	I0731 23:39:46.031953 1834111 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/config.json ...
	I0731 23:39:46.032258 1834111 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 23:39:46.032310 1834111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-442076
	I0731 23:39:46.048937 1834111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34976 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/embed-certs-442076/id_rsa Username:docker}
	I0731 23:39:46.141995 1834111 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 23:39:46.146593 1834111 start.go:128] duration metric: took 12.299205507s to createHost
	I0731 23:39:46.146657 1834111 start.go:83] releasing machines lock for "embed-certs-442076", held for 12.299400575s
	I0731 23:39:46.146754 1834111 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-442076
	I0731 23:39:46.165792 1834111 ssh_runner.go:195] Run: cat /version.json
	I0731 23:39:46.165844 1834111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-442076
	I0731 23:39:46.166270 1834111 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 23:39:46.166348 1834111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-442076
	I0731 23:39:46.187484 1834111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34976 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/embed-certs-442076/id_rsa Username:docker}
	I0731 23:39:46.190008 1834111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34976 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/embed-certs-442076/id_rsa Username:docker}
	I0731 23:39:46.276389 1834111 ssh_runner.go:195] Run: systemctl --version
	I0731 23:39:46.408885 1834111 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 23:39:46.558925 1834111 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 23:39:46.563912 1834111 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 23:39:46.587932 1834111 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0731 23:39:46.588053 1834111 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 23:39:46.628244 1834111 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0731 23:39:46.628269 1834111 start.go:495] detecting cgroup driver to use...
	I0731 23:39:46.628302 1834111 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0731 23:39:46.628366 1834111 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 23:39:46.644686 1834111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 23:39:46.656852 1834111 docker.go:217] disabling cri-docker service (if available) ...
	I0731 23:39:46.656927 1834111 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 23:39:46.676752 1834111 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 23:39:46.695070 1834111 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 23:39:46.798268 1834111 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 23:39:46.904476 1834111 docker.go:233] disabling docker service ...
	I0731 23:39:46.904571 1834111 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 23:39:46.928879 1834111 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 23:39:46.941888 1834111 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 23:39:47.039788 1834111 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 23:39:47.132523 1834111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 23:39:47.146022 1834111 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 23:39:47.165468 1834111 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 23:39:47.165587 1834111 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:39:47.178974 1834111 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 23:39:47.179092 1834111 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:39:47.189078 1834111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:39:47.199268 1834111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:39:47.210136 1834111 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 23:39:47.220049 1834111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:39:47.231555 1834111 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:39:47.248492 1834111 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
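
Taken together, the sed commands above leave /etc/crio/crio.conf.d/02-crio.conf with, in effect, the following settings. This is a reconstructed fragment: the exact file layout, including any TOML section headers, is an assumption, since the sed expressions match keys only.

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
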
	I0731 23:39:47.260541 1834111 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 23:39:47.269253 1834111 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 23:39:47.278124 1834111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:39:47.361840 1834111 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 23:39:47.482928 1834111 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 23:39:47.483051 1834111 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 23:39:47.487267 1834111 start.go:563] Will wait 60s for crictl version
	I0731 23:39:47.487376 1834111 ssh_runner.go:195] Run: which crictl
	I0731 23:39:47.490738 1834111 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 23:39:47.531621 1834111 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0731 23:39:47.531786 1834111 ssh_runner.go:195] Run: crio --version
	I0731 23:39:47.575537 1834111 ssh_runner.go:195] Run: crio --version
	I0731 23:39:47.621512 1834111 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.24.6 ...
	I0731 23:39:47.623713 1834111 cli_runner.go:164] Run: docker network inspect embed-certs-442076 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 23:39:47.639488 1834111 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0731 23:39:47.643283 1834111 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 23:39:47.654167 1834111 kubeadm.go:883] updating cluster {Name:embed-certs-442076 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-442076 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 23:39:47.654292 1834111 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 23:39:47.654354 1834111 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 23:39:47.734294 1834111 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 23:39:47.734316 1834111 crio.go:433] Images already preloaded, skipping extraction
	I0731 23:39:47.734380 1834111 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 23:39:47.770411 1834111 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 23:39:47.770435 1834111 cache_images.go:84] Images are preloaded, skipping loading
	I0731 23:39:47.770443 1834111 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.30.3 crio true true} ...
	I0731 23:39:47.770540 1834111 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-442076 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-442076 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
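
Note the empty ExecStart= line in the generated kubelet unit: this is the standard systemd idiom for overriding a list-valued Exec* setting. The first ExecStart= clears any command inherited from the packaged kubelet.service, and the second sets the minikube-specific command line; without the clearing line, systemd would reject a second ExecStart for a simple service. The same pattern in a generic drop-in (path and binary here are illustrative):

	# /etc/systemd/system/kubelet.service.d/10-override.conf
	[Service]
	ExecStart=
	ExecStart=/path/to/binary --flags
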
	I0731 23:39:47.770623 1834111 ssh_runner.go:195] Run: crio config
	I0731 23:39:47.827409 1834111 cni.go:84] Creating CNI manager for ""
	I0731 23:39:47.827435 1834111 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 23:39:47.827444 1834111 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 23:39:47.827470 1834111 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-442076 NodeName:embed-certs-442076 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 23:39:47.827608 1834111 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-442076"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 23:39:47.827684 1834111 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 23:39:47.836812 1834111 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 23:39:47.836944 1834111 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 23:39:47.846032 1834111 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0731 23:39:47.868485 1834111 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 23:39:47.888018 1834111 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0731 23:39:47.907906 1834111 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0731 23:39:47.911441 1834111 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 23:39:47.922431 1834111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:39:48.015540 1834111 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 23:39:48.032506 1834111 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076 for IP: 192.168.85.2
	I0731 23:39:48.032531 1834111 certs.go:194] generating shared ca certs ...
	I0731 23:39:48.032548 1834111 certs.go:226] acquiring lock for ca certs: {Name:mk6ccdabf08b8b9bfa2ad4dfbceb108d85e42085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:39:48.032697 1834111 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.key
	I0731 23:39:48.032743 1834111 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/proxy-client-ca.key
	I0731 23:39:48.032754 1834111 certs.go:256] generating profile certs ...
	I0731 23:39:48.032814 1834111 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/client.key
	I0731 23:39:48.032831 1834111 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/client.crt with IP's: []
	I0731 23:39:48.361753 1834111 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/client.crt ...
	I0731 23:39:48.361784 1834111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/client.crt: {Name:mk938f7bb94214cac36924a43920d97602976c07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:39:48.362451 1834111 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/client.key ...
	I0731 23:39:48.362467 1834111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/client.key: {Name:mke169bbae62bb8f05100383dc1211c9a59c0fa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:39:48.362953 1834111 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/apiserver.key.a9b1a3b7
	I0731 23:39:48.362969 1834111 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/apiserver.crt.a9b1a3b7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0731 23:39:48.990133 1834111 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/apiserver.crt.a9b1a3b7 ...
	I0731 23:39:48.990166 1834111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/apiserver.crt.a9b1a3b7: {Name:mk78cdb66dc52fb58ead4cb7fb2087f5aa51dbbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:39:48.990927 1834111 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/apiserver.key.a9b1a3b7 ...
	I0731 23:39:48.990946 1834111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/apiserver.key.a9b1a3b7: {Name:mk2a0132d79bb1f49e5ea21f1ad57933d434cbe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:39:48.991037 1834111 certs.go:381] copying /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/apiserver.crt.a9b1a3b7 -> /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/apiserver.crt
	I0731 23:39:48.991118 1834111 certs.go:385] copying /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/apiserver.key.a9b1a3b7 -> /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/apiserver.key
	I0731 23:39:48.991189 1834111 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/proxy-client.key
	I0731 23:39:48.991215 1834111 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/proxy-client.crt with IP's: []
	I0731 23:39:49.307933 1834111 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/proxy-client.crt ...
	I0731 23:39:49.307969 1834111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/proxy-client.crt: {Name:mkf8638ee681f202b320c4ca42be442e54bba924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:39:49.308193 1834111 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/proxy-client.key ...
	I0731 23:39:49.308212 1834111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/proxy-client.key: {Name:mk5f7fd48dec2e1633535fb21dd4997032346d7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:39:49.308943 1834111 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/1584615.pem (1338 bytes)
	W0731 23:39:49.308997 1834111 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/1584615_empty.pem, impossibly tiny 0 bytes
	I0731 23:39:49.309011 1834111 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 23:39:49.309038 1834111 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/ca.pem (1082 bytes)
	I0731 23:39:49.309066 1834111 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/cert.pem (1123 bytes)
	I0731 23:39:49.309093 1834111 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/key.pem (1679 bytes)
	I0731 23:39:49.309165 1834111 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1579223/.minikube/files/etc/ssl/certs/15846152.pem (1708 bytes)
	I0731 23:39:49.309783 1834111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 23:39:49.338943 1834111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 23:39:49.366092 1834111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 23:39:49.391593 1834111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 23:39:49.417278 1834111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0731 23:39:49.442826 1834111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 23:39:49.468182 1834111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 23:39:49.494797 1834111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/embed-certs-442076/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 23:39:49.521137 1834111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/certs/1584615.pem --> /usr/share/ca-certificates/1584615.pem (1338 bytes)
	I0731 23:39:49.552824 1834111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/files/etc/ssl/certs/15846152.pem --> /usr/share/ca-certificates/15846152.pem (1708 bytes)
	I0731 23:39:49.584096 1834111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 23:39:49.621716 1834111 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 23:39:49.646191 1834111 ssh_runner.go:195] Run: openssl version
	I0731 23:39:49.652697 1834111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1584615.pem && ln -fs /usr/share/ca-certificates/1584615.pem /etc/ssl/certs/1584615.pem"
	I0731 23:39:49.664061 1834111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1584615.pem
	I0731 23:39:49.668695 1834111 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 22:43 /usr/share/ca-certificates/1584615.pem
	I0731 23:39:49.668812 1834111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1584615.pem
	I0731 23:39:49.677032 1834111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1584615.pem /etc/ssl/certs/51391683.0"
	I0731 23:39:49.687881 1834111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15846152.pem && ln -fs /usr/share/ca-certificates/15846152.pem /etc/ssl/certs/15846152.pem"
	I0731 23:39:49.698378 1834111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15846152.pem
	I0731 23:39:49.703030 1834111 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 22:43 /usr/share/ca-certificates/15846152.pem
	I0731 23:39:49.703154 1834111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15846152.pem
	I0731 23:39:49.712460 1834111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15846152.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 23:39:49.723352 1834111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 23:39:49.734595 1834111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:39:49.738717 1834111 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:39:49.738843 1834111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:39:49.747267 1834111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 23:39:49.759099 1834111 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 23:39:49.762830 1834111 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 23:39:49.762929 1834111 kubeadm.go:392] StartCluster: {Name:embed-certs-442076 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-442076 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:39:49.763051 1834111 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 23:39:49.763139 1834111 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 23:39:49.812413 1834111 cri.go:89] found id: ""
	I0731 23:39:49.812534 1834111 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 23:39:49.821717 1834111 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 23:39:49.831495 1834111 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0731 23:39:49.831597 1834111 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 23:39:49.841296 1834111 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 23:39:49.841319 1834111 kubeadm.go:157] found existing configuration files:
	
	I0731 23:39:49.841373 1834111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 23:39:49.851174 1834111 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 23:39:49.851245 1834111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 23:39:49.860266 1834111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 23:39:49.869719 1834111 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 23:39:49.869785 1834111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 23:39:49.880733 1834111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 23:39:49.890301 1834111 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 23:39:49.890364 1834111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 23:39:49.899321 1834111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 23:39:49.909000 1834111 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 23:39:49.909078 1834111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 23:39:49.918104 1834111 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0731 23:39:49.979677 1834111 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 23:39:49.979737 1834111 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 23:39:50.029429 1834111 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0731 23:39:50.029558 1834111 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1066-aws
	I0731 23:39:50.029621 1834111 kubeadm.go:310] OS: Linux
	I0731 23:39:50.029695 1834111 kubeadm.go:310] CGROUPS_CPU: enabled
	I0731 23:39:50.029774 1834111 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0731 23:39:50.029848 1834111 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0731 23:39:50.029925 1834111 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0731 23:39:50.030001 1834111 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0731 23:39:50.030078 1834111 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0731 23:39:50.030151 1834111 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0731 23:39:50.030241 1834111 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0731 23:39:50.030318 1834111 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0731 23:39:50.108301 1834111 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 23:39:50.108460 1834111 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 23:39:50.108596 1834111 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 23:39:50.396380 1834111 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 23:39:49.171344 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:51.670446 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:50.398840 1834111 out.go:204]   - Generating certificates and keys ...
	I0731 23:39:50.399005 1834111 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 23:39:50.399119 1834111 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 23:39:50.767108 1834111 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 23:39:51.116836 1834111 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 23:39:52.157666 1834111 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 23:39:52.330566 1834111 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 23:39:52.950611 1834111 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 23:39:52.950755 1834111 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-442076 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0731 23:39:53.512174 1834111 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 23:39:53.512506 1834111 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-442076 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0731 23:39:53.843085 1834111 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 23:39:54.013729 1834111 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 23:39:54.313417 1834111 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 23:39:54.313721 1834111 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 23:39:54.673426 1834111 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 23:39:55.196378 1834111 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 23:39:55.409894 1834111 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 23:39:56.136508 1834111 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 23:39:56.732801 1834111 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 23:39:56.733480 1834111 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 23:39:56.736403 1834111 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 23:39:53.672244 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:56.168225 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:39:56.744628 1834111 out.go:204]   - Booting up control plane ...
	I0731 23:39:56.744737 1834111 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 23:39:56.744822 1834111 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 23:39:56.744890 1834111 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 23:39:56.749569 1834111 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 23:39:56.752077 1834111 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 23:39:56.752493 1834111 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 23:39:56.851134 1834111 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 23:39:56.851227 1834111 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 23:39:58.352145 1834111 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.500804847s
	I0731 23:39:58.352233 1834111 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 23:39:58.670724 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:00.671990 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:04.854156 1834111 kubeadm.go:310] [api-check] The API server is healthy after 6.502195374s
	I0731 23:40:04.875327 1834111 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 23:40:04.890040 1834111 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 23:40:04.918298 1834111 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 23:40:04.918498 1834111 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-442076 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 23:40:04.929879 1834111 kubeadm.go:310] [bootstrap-token] Using token: mhkz47.gp5m54wbjtla0ioz
	I0731 23:40:04.931797 1834111 out.go:204]   - Configuring RBAC rules ...
	I0731 23:40:04.931926 1834111 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 23:40:04.937555 1834111 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 23:40:04.945785 1834111 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 23:40:04.949645 1834111 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 23:40:04.953568 1834111 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 23:40:04.959582 1834111 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 23:40:05.261144 1834111 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 23:40:05.721461 1834111 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 23:40:06.267708 1834111 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 23:40:06.267731 1834111 kubeadm.go:310] 
	I0731 23:40:06.267800 1834111 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 23:40:06.267806 1834111 kubeadm.go:310] 
	I0731 23:40:06.267880 1834111 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 23:40:06.267885 1834111 kubeadm.go:310] 
	I0731 23:40:06.267911 1834111 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 23:40:06.267967 1834111 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 23:40:06.268019 1834111 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 23:40:06.268023 1834111 kubeadm.go:310] 
	I0731 23:40:06.268075 1834111 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 23:40:06.268079 1834111 kubeadm.go:310] 
	I0731 23:40:06.268130 1834111 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 23:40:06.268151 1834111 kubeadm.go:310] 
	I0731 23:40:06.268207 1834111 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 23:40:06.268279 1834111 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 23:40:06.268350 1834111 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 23:40:06.268357 1834111 kubeadm.go:310] 
	I0731 23:40:06.268444 1834111 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 23:40:06.268517 1834111 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 23:40:06.268522 1834111 kubeadm.go:310] 
	I0731 23:40:06.268602 1834111 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mhkz47.gp5m54wbjtla0ioz \
	I0731 23:40:06.268705 1834111 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6aeac36715e45fc93d88e018ede78e85fe5c7ed540db2ea7e85e78caf89de8d9 \
	I0731 23:40:06.268725 1834111 kubeadm.go:310] 	--control-plane 
	I0731 23:40:06.268729 1834111 kubeadm.go:310] 
	I0731 23:40:06.268812 1834111 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 23:40:06.268817 1834111 kubeadm.go:310] 
	I0731 23:40:06.268895 1834111 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mhkz47.gp5m54wbjtla0ioz \
	I0731 23:40:06.268992 1834111 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6aeac36715e45fc93d88e018ede78e85fe5c7ed540db2ea7e85e78caf89de8d9 
	I0731 23:40:06.270050 1834111 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1066-aws\n", err: exit status 1
	I0731 23:40:06.270170 1834111 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 23:40:06.270197 1834111 cni.go:84] Creating CNI manager for ""
	I0731 23:40:06.270207 1834111 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 23:40:06.272263 1834111 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 23:40:03.169169 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:05.169709 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:06.274137 1834111 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 23:40:06.278246 1834111 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0731 23:40:06.278268 1834111 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0731 23:40:06.298633 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0731 23:40:06.586806 1834111 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 23:40:06.586930 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:06.587016 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-442076 minikube.k8s.io/updated_at=2024_07_31T23_40_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1 minikube.k8s.io/name=embed-certs-442076 minikube.k8s.io/primary=true
	I0731 23:40:06.726480 1834111 ops.go:34] apiserver oom_adj: -16
	I0731 23:40:06.726583 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:07.227685 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:07.727196 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:08.227678 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:07.668238 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:09.669577 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:11.669953 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:08.727355 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:09.227282 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:09.726968 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:10.226779 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:10.726733 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:11.227486 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:11.727201 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:12.227157 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:12.727387 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:13.226722 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:14.168734 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:16.170059 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:13.727712 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:14.227461 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:14.727653 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:15.226692 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:15.727651 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:16.227619 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:16.727088 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:17.227457 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:17.726748 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:18.226964 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:18.727264 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:19.227013 1834111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:40:19.420176 1834111 kubeadm.go:1113] duration metric: took 12.833289402s to wait for elevateKubeSystemPrivileges
	I0731 23:40:19.420206 1834111 kubeadm.go:394] duration metric: took 29.657282433s to StartCluster
	I0731 23:40:19.420224 1834111 settings.go:142] acquiring lock: {Name:mk3c0c3b857f6d982767b7eb95481d3e4843baa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:40:19.420288 1834111 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1579223/kubeconfig
	I0731 23:40:19.421718 1834111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/kubeconfig: {Name:mkfef6e38d1ebcc45fcbbe766a2ae2945f7bd392 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:40:19.421942 1834111 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 23:40:19.422074 1834111 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 23:40:19.422325 1834111 config.go:182] Loaded profile config "embed-certs-442076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 23:40:19.422292 1834111 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 23:40:19.422408 1834111 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-442076"
	I0731 23:40:19.422432 1834111 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-442076"
	I0731 23:40:19.422457 1834111 host.go:66] Checking if "embed-certs-442076" exists ...
	I0731 23:40:19.422538 1834111 addons.go:69] Setting default-storageclass=true in profile "embed-certs-442076"
	I0731 23:40:19.422596 1834111 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-442076"
	I0731 23:40:19.422889 1834111 cli_runner.go:164] Run: docker container inspect embed-certs-442076 --format={{.State.Status}}
	I0731 23:40:19.422938 1834111 cli_runner.go:164] Run: docker container inspect embed-certs-442076 --format={{.State.Status}}
	I0731 23:40:19.425390 1834111 out.go:177] * Verifying Kubernetes components...
	I0731 23:40:19.433698 1834111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:40:19.465973 1834111 addons.go:234] Setting addon default-storageclass=true in "embed-certs-442076"
	I0731 23:40:19.466017 1834111 host.go:66] Checking if "embed-certs-442076" exists ...
	I0731 23:40:19.466255 1834111 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 23:40:19.466500 1834111 cli_runner.go:164] Run: docker container inspect embed-certs-442076 --format={{.State.Status}}
	I0731 23:40:19.469290 1834111 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 23:40:19.469314 1834111 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 23:40:19.469375 1834111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-442076
	I0731 23:40:19.529581 1834111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34976 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/embed-certs-442076/id_rsa Username:docker}
	I0731 23:40:19.531466 1834111 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 23:40:19.531486 1834111 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 23:40:19.531547 1834111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-442076
	I0731 23:40:19.560725 1834111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34976 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/embed-certs-442076/id_rsa Username:docker}
	I0731 23:40:19.711273 1834111 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 23:40:19.711368 1834111 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 23:40:19.717687 1834111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 23:40:19.781889 1834111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 23:40:20.300237 1834111 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0731 23:40:20.301438 1834111 node_ready.go:35] waiting up to 6m0s for node "embed-certs-442076" to be "Ready" ...
	I0731 23:40:20.532371 1834111 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 23:40:18.668036 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:20.669542 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:20.534250 1834111 addons.go:510] duration metric: took 1.111951927s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0731 23:40:20.806703 1834111 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-442076" context rescaled to 1 replicas
	I0731 23:40:22.304469 1834111 node_ready.go:53] node "embed-certs-442076" has status "Ready":"False"
	I0731 23:40:23.168832 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:25.169914 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:24.306224 1834111 node_ready.go:53] node "embed-certs-442076" has status "Ready":"False"
	I0731 23:40:26.804638 1834111 node_ready.go:53] node "embed-certs-442076" has status "Ready":"False"
	I0731 23:40:27.667676 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:29.668497 1829252 pod_ready.go:102] pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace has status "Ready":"False"
	I0731 23:40:31.168677 1829252 pod_ready.go:81] duration metric: took 4m0.006492497s for pod "metrics-server-9975d5f86-w25p2" in "kube-system" namespace to be "Ready" ...
	E0731 23:40:31.168708 1829252 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 23:40:31.168720 1829252 pod_ready.go:38] duration metric: took 5m23.160205049s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 23:40:31.168736 1829252 api_server.go:52] waiting for apiserver process to appear ...
	I0731 23:40:31.168766 1829252 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 23:40:31.168832 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 23:40:31.210665 1829252 cri.go:89] found id: "46018fe62e056d85a9d92909a3ffe934843c413af90487065b86f79764b6e416"
	I0731 23:40:31.210687 1829252 cri.go:89] found id: ""
	I0731 23:40:31.210695 1829252 logs.go:276] 1 containers: [46018fe62e056d85a9d92909a3ffe934843c413af90487065b86f79764b6e416]
	I0731 23:40:31.210762 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:31.214275 1829252 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 23:40:31.214347 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 23:40:31.254998 1829252 cri.go:89] found id: "6854efb05017bc711513167086bed436131771d8c2f42fe3910912ac80224c6c"
	I0731 23:40:31.255072 1829252 cri.go:89] found id: ""
	I0731 23:40:31.255096 1829252 logs.go:276] 1 containers: [6854efb05017bc711513167086bed436131771d8c2f42fe3910912ac80224c6c]
	I0731 23:40:31.255183 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:31.259162 1829252 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 23:40:31.259290 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 23:40:31.306298 1829252 cri.go:89] found id: "de1b5cb3d187661a9da8df949ef36da261421dac028ae8e070be86f4226a470b"
	I0731 23:40:31.306366 1829252 cri.go:89] found id: ""
	I0731 23:40:31.306386 1829252 logs.go:276] 1 containers: [de1b5cb3d187661a9da8df949ef36da261421dac028ae8e070be86f4226a470b]
	I0731 23:40:31.306468 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:31.310127 1829252 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 23:40:31.310198 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 23:40:31.350673 1829252 cri.go:89] found id: "9ea334663167822d57c49e09d75be27e43c4d14410ab00696d0bd381e74ba548"
	I0731 23:40:31.350738 1829252 cri.go:89] found id: ""
	I0731 23:40:31.350753 1829252 logs.go:276] 1 containers: [9ea334663167822d57c49e09d75be27e43c4d14410ab00696d0bd381e74ba548]
	I0731 23:40:31.350819 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:31.354414 1829252 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 23:40:31.354500 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 23:40:31.396679 1829252 cri.go:89] found id: "c21ba7634bd1a874c9ebc36d102421cb8c623f6f4fc40ef723fae526a1693796"
	I0731 23:40:31.396705 1829252 cri.go:89] found id: ""
	I0731 23:40:31.396712 1829252 logs.go:276] 1 containers: [c21ba7634bd1a874c9ebc36d102421cb8c623f6f4fc40ef723fae526a1693796]
	I0731 23:40:31.396776 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:31.400231 1829252 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 23:40:31.400307 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 23:40:31.453366 1829252 cri.go:89] found id: "6d2854e476f196556888fd39fa1c3e23a43223b6bf7632e42e8a2d3e07316f00"
	I0731 23:40:31.453390 1829252 cri.go:89] found id: ""
	I0731 23:40:31.453398 1829252 logs.go:276] 1 containers: [6d2854e476f196556888fd39fa1c3e23a43223b6bf7632e42e8a2d3e07316f00]
	I0731 23:40:31.453454 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:31.457018 1829252 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 23:40:31.457089 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 23:40:31.495600 1829252 cri.go:89] found id: "bab863fc639de5e0d2e1165a89d0b3b22043e0ffdb80ad3296bca7a7262985f7"
	I0731 23:40:31.495620 1829252 cri.go:89] found id: ""
	I0731 23:40:31.495628 1829252 logs.go:276] 1 containers: [bab863fc639de5e0d2e1165a89d0b3b22043e0ffdb80ad3296bca7a7262985f7]
	I0731 23:40:31.495689 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:31.499263 1829252 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 23:40:31.499355 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 23:40:31.538992 1829252 cri.go:89] found id: "d6f8ffde1c928670ca20febc0c4a113d8ad3e22267d8fd745d5c54e8cc854a27"
	I0731 23:40:31.539015 1829252 cri.go:89] found id: ""
	I0731 23:40:31.539023 1829252 logs.go:276] 1 containers: [d6f8ffde1c928670ca20febc0c4a113d8ad3e22267d8fd745d5c54e8cc854a27]
	I0731 23:40:31.539099 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:31.542861 1829252 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 23:40:31.542996 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 23:40:31.594113 1829252 cri.go:89] found id: "af4a5d8f7ff273f31d82cb997af6a85d6437344b03032e0188e4e18ed9144b92"
	I0731 23:40:31.594172 1829252 cri.go:89] found id: "e7eb6229b113884a77ab02b300c2b453be8b1723c83dc8f8862391ed9ef419e1"
	I0731 23:40:31.594192 1829252 cri.go:89] found id: ""
	I0731 23:40:31.594219 1829252 logs.go:276] 2 containers: [af4a5d8f7ff273f31d82cb997af6a85d6437344b03032e0188e4e18ed9144b92 e7eb6229b113884a77ab02b300c2b453be8b1723c83dc8f8862391ed9ef419e1]
	I0731 23:40:31.594308 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:31.598495 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:31.601970 1829252 logs.go:123] Gathering logs for dmesg ...
	I0731 23:40:31.601993 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 23:40:31.622722 1829252 logs.go:123] Gathering logs for etcd [6854efb05017bc711513167086bed436131771d8c2f42fe3910912ac80224c6c] ...
	I0731 23:40:31.622751 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6854efb05017bc711513167086bed436131771d8c2f42fe3910912ac80224c6c"
	I0731 23:40:31.669324 1829252 logs.go:123] Gathering logs for kube-proxy [c21ba7634bd1a874c9ebc36d102421cb8c623f6f4fc40ef723fae526a1693796] ...
	I0731 23:40:31.669358 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c21ba7634bd1a874c9ebc36d102421cb8c623f6f4fc40ef723fae526a1693796"
	I0731 23:40:31.709562 1829252 logs.go:123] Gathering logs for kindnet [bab863fc639de5e0d2e1165a89d0b3b22043e0ffdb80ad3296bca7a7262985f7] ...
	I0731 23:40:31.709593 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bab863fc639de5e0d2e1165a89d0b3b22043e0ffdb80ad3296bca7a7262985f7"
	I0731 23:40:31.760136 1829252 logs.go:123] Gathering logs for storage-provisioner [af4a5d8f7ff273f31d82cb997af6a85d6437344b03032e0188e4e18ed9144b92] ...
	I0731 23:40:31.760180 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af4a5d8f7ff273f31d82cb997af6a85d6437344b03032e0188e4e18ed9144b92"
	I0731 23:40:31.800595 1829252 logs.go:123] Gathering logs for container status ...
	I0731 23:40:31.800673 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 23:40:31.852654 1829252 logs.go:123] Gathering logs for kubelet ...
	I0731 23:40:31.852732 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 23:40:31.905287 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593291     742 reflector.go:138] object-"kube-system"/"kindnet-token-crzsj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-crzsj" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:31.905584 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593424     742 reflector.go:138] object-"kube-system"/"storage-provisioner-token-d22vf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-d22vf" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:31.905826 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593481     742 reflector.go:138] object-"kube-system"/"metrics-server-token-kr52c": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-kr52c" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:31.906052 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593534     742 reflector.go:138] object-"kube-system"/"kube-proxy-token-fzgtf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-fzgtf" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:31.906267 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593592     742 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:31.906486 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593646     742 reflector.go:138] object-"default"/"default-token-4prkn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-4prkn" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:31.906695 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593719     742 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:31.906917 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593768     742 reflector.go:138] object-"kube-system"/"coredns-token-6sgwf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-6sgwf" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:31.917757 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:09 old-k8s-version-130660 kubelet[742]: E0731 23:35:09.731660     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0731 23:40:31.918380 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:09 old-k8s-version-130660 kubelet[742]: E0731 23:35:09.980822     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.922205 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:23 old-k8s-version-130660 kubelet[742]: E0731 23:35:23.912962     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0731 23:40:31.923741 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:37 old-k8s-version-130660 kubelet[742]: E0731 23:35:37.900150     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.924088 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:38 old-k8s-version-130660 kubelet[742]: E0731 23:35:38.139768     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.924573 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:39 old-k8s-version-130660 kubelet[742]: E0731 23:35:39.142583     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.925055 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:42 old-k8s-version-130660 kubelet[742]: E0731 23:35:42.939292     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.927208 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:51 old-k8s-version-130660 kubelet[742]: E0731 23:35:51.915245     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0731 23:40:31.927833 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:55 old-k8s-version-130660 kubelet[742]: E0731 23:35:55.174079     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.928027 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:02 old-k8s-version-130660 kubelet[742]: E0731 23:36:02.900602     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.928415 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:02 old-k8s-version-130660 kubelet[742]: E0731 23:36:02.938961     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.928612 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:14 old-k8s-version-130660 kubelet[742]: E0731 23:36:14.900659     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.929257 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:16 old-k8s-version-130660 kubelet[742]: E0731 23:36:16.237810     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.929603 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:22 old-k8s-version-130660 kubelet[742]: E0731 23:36:22.938992     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.929795 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:25 old-k8s-version-130660 kubelet[742]: E0731 23:36:25.900141     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.930137 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:37 old-k8s-version-130660 kubelet[742]: E0731 23:36:37.900048     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.932288 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:37 old-k8s-version-130660 kubelet[742]: E0731 23:36:37.913581     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0731 23:40:31.932483 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:48 old-k8s-version-130660 kubelet[742]: E0731 23:36:48.900800     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.932831 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:50 old-k8s-version-130660 kubelet[742]: E0731 23:36:50.899714     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.933026 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:02 old-k8s-version-130660 kubelet[742]: E0731 23:37:02.901296     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.933664 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:06 old-k8s-version-130660 kubelet[742]: E0731 23:37:06.310165     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.934010 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:12 old-k8s-version-130660 kubelet[742]: E0731 23:37:12.939044     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.934204 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:14 old-k8s-version-130660 kubelet[742]: E0731 23:37:14.900812     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.934549 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:27 old-k8s-version-130660 kubelet[742]: E0731 23:37:27.899533     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.934743 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:28 old-k8s-version-130660 kubelet[742]: E0731 23:37:28.900944     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.935086 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:41 old-k8s-version-130660 kubelet[742]: E0731 23:37:41.899714     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.935276 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:42 old-k8s-version-130660 kubelet[742]: E0731 23:37:42.900065     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.935465 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:54 old-k8s-version-130660 kubelet[742]: E0731 23:37:54.900104     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.935812 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:55 old-k8s-version-130660 kubelet[742]: E0731 23:37:55.899783     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.938272 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:05 old-k8s-version-130660 kubelet[742]: E0731 23:38:05.912798     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0731 23:40:31.939974 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:10 old-k8s-version-130660 kubelet[742]: E0731 23:38:10.899721     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.940195 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:18 old-k8s-version-130660 kubelet[742]: E0731 23:38:18.901024     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.940543 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:21 old-k8s-version-130660 kubelet[742]: E0731 23:38:21.899662     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.940734 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:33 old-k8s-version-130660 kubelet[742]: E0731 23:38:33.900179     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.941390 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:36 old-k8s-version-130660 kubelet[742]: E0731 23:38:36.433618     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.941754 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:42 old-k8s-version-130660 kubelet[742]: E0731 23:38:42.939018     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.941951 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:46 old-k8s-version-130660 kubelet[742]: E0731 23:38:46.900616     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.942315 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:54 old-k8s-version-130660 kubelet[742]: E0731 23:38:54.899746     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.942519 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:01 old-k8s-version-130660 kubelet[742]: E0731 23:39:01.900337     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.942862 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:08 old-k8s-version-130660 kubelet[742]: E0731 23:39:08.899966     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.943053 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:14 old-k8s-version-130660 kubelet[742]: E0731 23:39:14.900213     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.943394 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:20 old-k8s-version-130660 kubelet[742]: E0731 23:39:20.899847     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.943585 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:25 old-k8s-version-130660 kubelet[742]: E0731 23:39:25.900329     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.943933 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:32 old-k8s-version-130660 kubelet[742]: E0731 23:39:32.900100     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.944131 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:36 old-k8s-version-130660 kubelet[742]: E0731 23:39:36.900554     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.944482 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:46 old-k8s-version-130660 kubelet[742]: E0731 23:39:46.899659     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.944678 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:47 old-k8s-version-130660 kubelet[742]: E0731 23:39:47.900629     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.945022 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:57 old-k8s-version-130660 kubelet[742]: E0731 23:39:57.899835     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.945479 1829252 logs.go:138] Found kubelet problem: Jul 31 23:40:02 old-k8s-version-130660 kubelet[742]: E0731 23:40:02.900337     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.945825 1829252 logs.go:138] Found kubelet problem: Jul 31 23:40:11 old-k8s-version-130660 kubelet[742]: E0731 23:40:11.899717     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.946044 1829252 logs.go:138] Found kubelet problem: Jul 31 23:40:16 old-k8s-version-130660 kubelet[742]: E0731 23:40:16.900104     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:31.946393 1829252 logs.go:138] Found kubelet problem: Jul 31 23:40:24 old-k8s-version-130660 kubelet[742]: E0731 23:40:24.899764     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:31.946587 1829252 logs.go:138] Found kubelet problem: Jul 31 23:40:27 old-k8s-version-130660 kubelet[742]: E0731 23:40:27.900206     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
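Each "Found kubelet problem" warning above comes from a scan of the journalctl output for the kubelet unit (logs.go:138). A hedged sketch of such a scan, assuming a hand-picked marker list; minikube's actual pattern set is not reproduced here:

    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
        "strings"
    )

    // scanKubeletProblems runs "journalctl -u kubelet -n 400" and reports any line
    // containing a known problem marker, much like the warnings in the log above.
    // The marker list is illustrative only.
    func scanKubeletProblems() ([]string, error) {
        out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
        if err != nil {
            return nil, err
        }
        markers := []string{"Error syncing pod", "Failed to watch", "ImagePullBackOff", "CrashLoopBackOff"}
        var problems []string
        sc := bufio.NewScanner(strings.NewReader(string(out)))
        for sc.Scan() {
            line := sc.Text()
            for _, m := range markers {
                if strings.Contains(line, m) {
                    problems = append(problems, line)
                    break
                }
            }
        }
        return problems, sc.Err()
    }

    func main() {
        problems, err := scanKubeletProblems()
        if err != nil {
            fmt.Println("scan failed:", err)
            return
        }
        for _, p := range problems {
            fmt.Println("Found kubelet problem:", p)
        }
    }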
	I0731 23:40:31.946597 1829252 logs.go:123] Gathering logs for kube-scheduler [9ea334663167822d57c49e09d75be27e43c4d14410ab00696d0bd381e74ba548] ...
	I0731 23:40:31.946613 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea334663167822d57c49e09d75be27e43c4d14410ab00696d0bd381e74ba548"
	I0731 23:40:31.994035 1829252 logs.go:123] Gathering logs for describe nodes ...
	I0731 23:40:31.994065 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 23:40:32.158567 1829252 logs.go:123] Gathering logs for kube-apiserver [46018fe62e056d85a9d92909a3ffe934843c413af90487065b86f79764b6e416] ...
	I0731 23:40:32.158602 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46018fe62e056d85a9d92909a3ffe934843c413af90487065b86f79764b6e416"
	I0731 23:40:32.227818 1829252 logs.go:123] Gathering logs for coredns [de1b5cb3d187661a9da8df949ef36da261421dac028ae8e070be86f4226a470b] ...
	I0731 23:40:32.227860 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de1b5cb3d187661a9da8df949ef36da261421dac028ae8e070be86f4226a470b"
	I0731 23:40:32.273447 1829252 logs.go:123] Gathering logs for kube-controller-manager [6d2854e476f196556888fd39fa1c3e23a43223b6bf7632e42e8a2d3e07316f00] ...
	I0731 23:40:32.273478 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d2854e476f196556888fd39fa1c3e23a43223b6bf7632e42e8a2d3e07316f00"
	I0731 23:40:32.344093 1829252 logs.go:123] Gathering logs for kubernetes-dashboard [d6f8ffde1c928670ca20febc0c4a113d8ad3e22267d8fd745d5c54e8cc854a27] ...
	I0731 23:40:32.344134 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6f8ffde1c928670ca20febc0c4a113d8ad3e22267d8fd745d5c54e8cc854a27"
	I0731 23:40:32.385695 1829252 logs.go:123] Gathering logs for storage-provisioner [e7eb6229b113884a77ab02b300c2b453be8b1723c83dc8f8862391ed9ef419e1] ...
	I0731 23:40:32.385724 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7eb6229b113884a77ab02b300c2b453be8b1723c83dc8f8862391ed9ef419e1"
	I0731 23:40:32.435975 1829252 logs.go:123] Gathering logs for CRI-O ...
	I0731 23:40:32.436001 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 23:40:32.522296 1829252 out.go:304] Setting ErrFile to fd 2...
	I0731 23:40:32.522372 1829252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 23:40:32.522460 1829252 out.go:239] X Problems detected in kubelet:
	W0731 23:40:32.522500 1829252 out.go:239]   Jul 31 23:40:02 old-k8s-version-130660 kubelet[742]: E0731 23:40:02.900337     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:32.522532 1829252 out.go:239]   Jul 31 23:40:11 old-k8s-version-130660 kubelet[742]: E0731 23:40:11.899717     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:32.522583 1829252 out.go:239]   Jul 31 23:40:16 old-k8s-version-130660 kubelet[742]: E0731 23:40:16.900104     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:32.522613 1829252 out.go:239]   Jul 31 23:40:24 old-k8s-version-130660 kubelet[742]: E0731 23:40:24.899764     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:32.522668 1829252 out.go:239]   Jul 31 23:40:27 old-k8s-version-130660 kubelet[742]: E0731 23:40:27.900206     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0731 23:40:32.522702 1829252 out.go:304] Setting ErrFile to fd 2...
	I0731 23:40:32.522720 1829252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:40:29.305131 1834111 node_ready.go:53] node "embed-certs-442076" has status "Ready":"False"
	I0731 23:40:31.805944 1834111 node_ready.go:53] node "embed-certs-442076" has status "Ready":"False"
	I0731 23:40:32.805330 1834111 node_ready.go:49] node "embed-certs-442076" has status "Ready":"True"
	I0731 23:40:32.805357 1834111 node_ready.go:38] duration metric: took 12.503858647s for node "embed-certs-442076" to be "Ready" ...
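The node_ready.go lines above poll the node object until its Ready condition reports True (here it flipped after roughly 12.5s). A minimal client-go sketch of that check, assuming a kubeconfig at the default location; this is not minikube's internal node_ready.go, just the same condition test expressed directly:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the named node has its Ready condition set to True.
    func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(config)
        // Poll until Ready, echoing the repeated node_ready.go:53 lines above.
        for {
            ready, err := nodeIsReady(context.Background(), cs, "embed-certs-442076")
            fmt.Printf("node \"embed-certs-442076\" Ready=%v err=%v\n", ready, err)
            if ready {
                return
            }
            time.Sleep(2 * time.Second)
        }
    }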
	I0731 23:40:32.805368 1834111 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 23:40:32.812915 1834111 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5dqg7" in "kube-system" namespace to be "Ready" ...
	I0731 23:40:33.819296 1834111 pod_ready.go:92] pod "coredns-7db6d8ff4d-5dqg7" in "kube-system" namespace has status "Ready":"True"
	I0731 23:40:33.819361 1834111 pod_ready.go:81] duration metric: took 1.006413591s for pod "coredns-7db6d8ff4d-5dqg7" in "kube-system" namespace to be "Ready" ...
	I0731 23:40:33.819381 1834111 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-442076" in "kube-system" namespace to be "Ready" ...
	I0731 23:40:33.824885 1834111 pod_ready.go:92] pod "etcd-embed-certs-442076" in "kube-system" namespace has status "Ready":"True"
	I0731 23:40:33.824914 1834111 pod_ready.go:81] duration metric: took 5.524895ms for pod "etcd-embed-certs-442076" in "kube-system" namespace to be "Ready" ...
	I0731 23:40:33.824930 1834111 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-442076" in "kube-system" namespace to be "Ready" ...
	I0731 23:40:33.830637 1834111 pod_ready.go:92] pod "kube-apiserver-embed-certs-442076" in "kube-system" namespace has status "Ready":"True"
	I0731 23:40:33.830664 1834111 pod_ready.go:81] duration metric: took 5.72523ms for pod "kube-apiserver-embed-certs-442076" in "kube-system" namespace to be "Ready" ...
	I0731 23:40:33.830676 1834111 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-442076" in "kube-system" namespace to be "Ready" ...
	I0731 23:40:33.835985 1834111 pod_ready.go:92] pod "kube-controller-manager-embed-certs-442076" in "kube-system" namespace has status "Ready":"True"
	I0731 23:40:33.836012 1834111 pod_ready.go:81] duration metric: took 5.328227ms for pod "kube-controller-manager-embed-certs-442076" in "kube-system" namespace to be "Ready" ...
	I0731 23:40:33.836024 1834111 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-65tsd" in "kube-system" namespace to be "Ready" ...
	I0731 23:40:34.006590 1834111 pod_ready.go:92] pod "kube-proxy-65tsd" in "kube-system" namespace has status "Ready":"True"
	I0731 23:40:34.006624 1834111 pod_ready.go:81] duration metric: took 170.572756ms for pod "kube-proxy-65tsd" in "kube-system" namespace to be "Ready" ...
	I0731 23:40:34.006639 1834111 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-442076" in "kube-system" namespace to be "Ready" ...
	I0731 23:40:34.406068 1834111 pod_ready.go:92] pod "kube-scheduler-embed-certs-442076" in "kube-system" namespace has status "Ready":"True"
	I0731 23:40:34.406094 1834111 pod_ready.go:81] duration metric: took 399.446918ms for pod "kube-scheduler-embed-certs-442076" in "kube-system" namespace to be "Ready" ...
	I0731 23:40:34.406107 1834111 pod_ready.go:38] duration metric: took 1.600699153s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
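The pod_ready.go waits above apply the same test per pod: for each label selector in the list, fetch the matching kube-system pods and require the PodReady condition to be True. A sketch under the same kubeconfig assumption as the node check above:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether a pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(config)
        // One selector per system-critical component, mirroring the label list in the log.
        selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
            "component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}
        for _, sel := range selectors {
            pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
                metav1.ListOptions{LabelSelector: sel})
            if err != nil {
                panic(err)
            }
            for i := range pods.Items {
                fmt.Printf("%s Ready=%v\n", pods.Items[i].Name, podReady(&pods.Items[i]))
            }
        }
    }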
	I0731 23:40:34.406141 1834111 api_server.go:52] waiting for apiserver process to appear ...
	I0731 23:40:34.406220 1834111 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:40:34.417712 1834111 api_server.go:72] duration metric: took 14.995740445s to wait for apiserver process to appear ...
	I0731 23:40:34.417781 1834111 api_server.go:88] waiting for apiserver healthz status ...
	I0731 23:40:34.417807 1834111 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0731 23:40:34.426374 1834111 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0731 23:40:34.427349 1834111 api_server.go:141] control plane version: v1.30.3
	I0731 23:40:34.427374 1834111 api_server.go:131] duration metric: took 9.581783ms to wait for apiserver health ...
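The healthz probe at api_server.go:253 is an HTTPS GET that succeeds when the apiserver answers 200 with the body "ok", exactly as the two lines above show. A bare-bones sketch; note the real check authenticates with the cluster's credentials, whereas skipping TLS verification here is purely an illustrative shortcut:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Illustrative only: the genuine probe presents client certificates
        // instead of disabling certificate verification.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.85.2:8443/healthz")
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // A healthy apiserver returns 200 with the body "ok", as in the log above.
        fmt.Printf("https://192.168.85.2:8443/healthz returned %d: %s\n", resp.StatusCode, body)
    }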
	I0731 23:40:34.427382 1834111 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 23:40:34.608528 1834111 system_pods.go:59] 8 kube-system pods found
	I0731 23:40:34.608563 1834111 system_pods.go:61] "coredns-7db6d8ff4d-5dqg7" [b5e9d8a3-ab26-467a-8a95-24f8b88652ad] Running
	I0731 23:40:34.608569 1834111 system_pods.go:61] "etcd-embed-certs-442076" [f1ffacd1-c460-4211-815d-3a5402822164] Running
	I0731 23:40:34.608574 1834111 system_pods.go:61] "kindnet-2qv9v" [5d313ae4-ab0a-477e-b68a-c8f8fdd30ca8] Running
	I0731 23:40:34.608579 1834111 system_pods.go:61] "kube-apiserver-embed-certs-442076" [cde12b03-952c-4947-95a1-bb2902b3da2e] Running
	I0731 23:40:34.608584 1834111 system_pods.go:61] "kube-controller-manager-embed-certs-442076" [7937b520-cbfd-4ba8-a67a-90ed3600296c] Running
	I0731 23:40:34.608589 1834111 system_pods.go:61] "kube-proxy-65tsd" [45c66915-c96b-487b-9f5c-cd29cf7522b9] Running
	I0731 23:40:34.608593 1834111 system_pods.go:61] "kube-scheduler-embed-certs-442076" [1133e84e-5a70-445e-b8e0-28fafcedb28e] Running
	I0731 23:40:34.608597 1834111 system_pods.go:61] "storage-provisioner" [e54f8cba-d6ab-44c0-b886-e78a5c6d8359] Running
	I0731 23:40:34.608605 1834111 system_pods.go:74] duration metric: took 181.215786ms to wait for pod list to return data ...
	I0731 23:40:34.608617 1834111 default_sa.go:34] waiting for default service account to be created ...
	I0731 23:40:34.805589 1834111 default_sa.go:45] found service account: "default"
	I0731 23:40:34.805615 1834111 default_sa.go:55] duration metric: took 196.987461ms for default service account to be created ...
	I0731 23:40:34.805627 1834111 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 23:40:35.015695 1834111 system_pods.go:86] 8 kube-system pods found
	I0731 23:40:35.015725 1834111 system_pods.go:89] "coredns-7db6d8ff4d-5dqg7" [b5e9d8a3-ab26-467a-8a95-24f8b88652ad] Running
	I0731 23:40:35.015732 1834111 system_pods.go:89] "etcd-embed-certs-442076" [f1ffacd1-c460-4211-815d-3a5402822164] Running
	I0731 23:40:35.015737 1834111 system_pods.go:89] "kindnet-2qv9v" [5d313ae4-ab0a-477e-b68a-c8f8fdd30ca8] Running
	I0731 23:40:35.015741 1834111 system_pods.go:89] "kube-apiserver-embed-certs-442076" [cde12b03-952c-4947-95a1-bb2902b3da2e] Running
	I0731 23:40:35.015747 1834111 system_pods.go:89] "kube-controller-manager-embed-certs-442076" [7937b520-cbfd-4ba8-a67a-90ed3600296c] Running
	I0731 23:40:35.015751 1834111 system_pods.go:89] "kube-proxy-65tsd" [45c66915-c96b-487b-9f5c-cd29cf7522b9] Running
	I0731 23:40:35.016837 1834111 system_pods.go:89] "kube-scheduler-embed-certs-442076" [1133e84e-5a70-445e-b8e0-28fafcedb28e] Running
	I0731 23:40:35.016864 1834111 system_pods.go:89] "storage-provisioner" [e54f8cba-d6ab-44c0-b886-e78a5c6d8359] Running
	I0731 23:40:35.016875 1834111 system_pods.go:126] duration metric: took 211.241037ms to wait for k8s-apps to be running ...
	I0731 23:40:35.016884 1834111 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 23:40:35.016954 1834111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 23:40:35.030257 1834111 system_svc.go:56] duration metric: took 13.362708ms WaitForService to wait for kubelet
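The system_svc.go check above verifies the kubelet unit with "systemctl is-active --quiet", which prints nothing and reports the result through its exit status alone (0 means active); the log's exact invocation passes "service kubelet" as the unit arguments. A one-step sketch of the same test:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // With --quiet, systemctl stays silent and a nil error (exit status 0)
        // is the only signal that the unit is active.
        err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
        if err == nil {
            fmt.Println("kubelet service is running")
        } else {
            fmt.Println("kubelet service is not active:", err)
        }
    }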
	I0731 23:40:35.030288 1834111 kubeadm.go:582] duration metric: took 15.608320841s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 23:40:35.030309 1834111 node_conditions.go:102] verifying NodePressure condition ...
	I0731 23:40:35.205649 1834111 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0731 23:40:35.205683 1834111 node_conditions.go:123] node cpu capacity is 2
	I0731 23:40:35.205694 1834111 node_conditions.go:105] duration metric: took 175.380533ms to run NodePressure ...
	I0731 23:40:35.205707 1834111 start.go:241] waiting for startup goroutines ...
	I0731 23:40:35.205714 1834111 start.go:246] waiting for cluster config update ...
	I0731 23:40:35.205725 1834111 start.go:255] writing updated cluster config ...
	I0731 23:40:35.206000 1834111 ssh_runner.go:195] Run: rm -f paused
	I0731 23:40:35.262744 1834111 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 23:40:35.265019 1834111 out.go:177] * Done! kubectl is now configured to use "embed-certs-442076" cluster and "default" namespace by default
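The closing start.go:600 line compares the host's kubectl version against the cluster version and reports the minor-version skew, 0 here since both are 1.30.3. A sketch of that comparison; the minor() helper is an assumption for illustration, not minikube's code:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minor extracts the minor component of a "major.minor.patch" version string.
    func minor(v string) int {
        parts := strings.Split(v, ".")
        if len(parts) < 2 {
            return 0
        }
        m, _ := strconv.Atoi(parts[1])
        return m
    }

    func main() {
        kubectlVersion, clusterVersion := "1.30.3", "1.30.3"
        skew := minor(kubectlVersion) - minor(clusterVersion)
        if skew < 0 {
            skew = -skew
        }
        // Mirrors "kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)" in the log.
        fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectlVersion, clusterVersion, skew)
    }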
	I0731 23:40:42.523167 1829252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:40:42.534963 1829252 api_server.go:72] duration metric: took 5m51.519646281s to wait for apiserver process to appear ...
	I0731 23:40:42.534988 1829252 api_server.go:88] waiting for apiserver healthz status ...
	I0731 23:40:42.535022 1829252 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 23:40:42.535080 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 23:40:42.573475 1829252 cri.go:89] found id: "46018fe62e056d85a9d92909a3ffe934843c413af90487065b86f79764b6e416"
	I0731 23:40:42.573498 1829252 cri.go:89] found id: ""
	I0731 23:40:42.573507 1829252 logs.go:276] 1 containers: [46018fe62e056d85a9d92909a3ffe934843c413af90487065b86f79764b6e416]
	I0731 23:40:42.573565 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:42.577141 1829252 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 23:40:42.577216 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 23:40:42.618294 1829252 cri.go:89] found id: "6854efb05017bc711513167086bed436131771d8c2f42fe3910912ac80224c6c"
	I0731 23:40:42.618317 1829252 cri.go:89] found id: ""
	I0731 23:40:42.618325 1829252 logs.go:276] 1 containers: [6854efb05017bc711513167086bed436131771d8c2f42fe3910912ac80224c6c]
	I0731 23:40:42.618380 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:42.622074 1829252 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 23:40:42.622146 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 23:40:42.661456 1829252 cri.go:89] found id: "de1b5cb3d187661a9da8df949ef36da261421dac028ae8e070be86f4226a470b"
	I0731 23:40:42.661478 1829252 cri.go:89] found id: ""
	I0731 23:40:42.661486 1829252 logs.go:276] 1 containers: [de1b5cb3d187661a9da8df949ef36da261421dac028ae8e070be86f4226a470b]
	I0731 23:40:42.661547 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:42.665157 1829252 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 23:40:42.665221 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 23:40:42.701128 1829252 cri.go:89] found id: "9ea334663167822d57c49e09d75be27e43c4d14410ab00696d0bd381e74ba548"
	I0731 23:40:42.701152 1829252 cri.go:89] found id: ""
	I0731 23:40:42.701160 1829252 logs.go:276] 1 containers: [9ea334663167822d57c49e09d75be27e43c4d14410ab00696d0bd381e74ba548]
	I0731 23:40:42.701216 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:42.704806 1829252 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 23:40:42.704875 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 23:40:42.741987 1829252 cri.go:89] found id: "c21ba7634bd1a874c9ebc36d102421cb8c623f6f4fc40ef723fae526a1693796"
	I0731 23:40:42.742008 1829252 cri.go:89] found id: ""
	I0731 23:40:42.742017 1829252 logs.go:276] 1 containers: [c21ba7634bd1a874c9ebc36d102421cb8c623f6f4fc40ef723fae526a1693796]
	I0731 23:40:42.742095 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:42.745798 1829252 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 23:40:42.745867 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 23:40:42.789352 1829252 cri.go:89] found id: "6d2854e476f196556888fd39fa1c3e23a43223b6bf7632e42e8a2d3e07316f00"
	I0731 23:40:42.789377 1829252 cri.go:89] found id: ""
	I0731 23:40:42.789384 1829252 logs.go:276] 1 containers: [6d2854e476f196556888fd39fa1c3e23a43223b6bf7632e42e8a2d3e07316f00]
	I0731 23:40:42.789443 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:42.793051 1829252 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 23:40:42.793147 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 23:40:42.830015 1829252 cri.go:89] found id: "bab863fc639de5e0d2e1165a89d0b3b22043e0ffdb80ad3296bca7a7262985f7"
	I0731 23:40:42.830034 1829252 cri.go:89] found id: ""
	I0731 23:40:42.830042 1829252 logs.go:276] 1 containers: [bab863fc639de5e0d2e1165a89d0b3b22043e0ffdb80ad3296bca7a7262985f7]
	I0731 23:40:42.830096 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:42.833643 1829252 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 23:40:42.833716 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 23:40:42.873447 1829252 cri.go:89] found id: "af4a5d8f7ff273f31d82cb997af6a85d6437344b03032e0188e4e18ed9144b92"
	I0731 23:40:42.873469 1829252 cri.go:89] found id: "e7eb6229b113884a77ab02b300c2b453be8b1723c83dc8f8862391ed9ef419e1"
	I0731 23:40:42.873474 1829252 cri.go:89] found id: ""
	I0731 23:40:42.873481 1829252 logs.go:276] 2 containers: [af4a5d8f7ff273f31d82cb997af6a85d6437344b03032e0188e4e18ed9144b92 e7eb6229b113884a77ab02b300c2b453be8b1723c83dc8f8862391ed9ef419e1]
	I0731 23:40:42.873535 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:42.877038 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:42.880423 1829252 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 23:40:42.880533 1829252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 23:40:42.919417 1829252 cri.go:89] found id: "d6f8ffde1c928670ca20febc0c4a113d8ad3e22267d8fd745d5c54e8cc854a27"
	I0731 23:40:42.919441 1829252 cri.go:89] found id: ""
	I0731 23:40:42.919448 1829252 logs.go:276] 1 containers: [d6f8ffde1c928670ca20febc0c4a113d8ad3e22267d8fd745d5c54e8cc854a27]
	I0731 23:40:42.919510 1829252 ssh_runner.go:195] Run: which crictl
	I0731 23:40:42.923555 1829252 logs.go:123] Gathering logs for dmesg ...
	I0731 23:40:42.923587 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 23:40:42.942311 1829252 logs.go:123] Gathering logs for kube-controller-manager [6d2854e476f196556888fd39fa1c3e23a43223b6bf7632e42e8a2d3e07316f00] ...
	I0731 23:40:42.942342 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d2854e476f196556888fd39fa1c3e23a43223b6bf7632e42e8a2d3e07316f00"
	I0731 23:40:43.024646 1829252 logs.go:123] Gathering logs for kindnet [bab863fc639de5e0d2e1165a89d0b3b22043e0ffdb80ad3296bca7a7262985f7] ...
	I0731 23:40:43.024684 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bab863fc639de5e0d2e1165a89d0b3b22043e0ffdb80ad3296bca7a7262985f7"
	I0731 23:40:43.086681 1829252 logs.go:123] Gathering logs for CRI-O ...
	I0731 23:40:43.086711 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 23:40:43.174577 1829252 logs.go:123] Gathering logs for container status ...
	I0731 23:40:43.174613 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
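The "container status" step above wraps its crictl call in a shell fallback chain: "which crictl || echo crictl" resolves the binary's full path when present, and if the whole crictl invocation fails, "|| sudo docker ps -a" retries with Docker. The same fallback, sketched in Go instead of bash:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Try the CRI-compatible listing first, then fall back to Docker,
        // matching the "|| sudo docker ps -a" chain in the log above.
        out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        }
        if err != nil {
            fmt.Println("neither crictl nor docker could list containers:", err)
            return
        }
        fmt.Print(string(out))
    }

The fallback matters on CRI-O nodes like this one, where no Docker daemon is present: the first branch succeeds and the docker path is never exercised.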
	I0731 23:40:43.216374 1829252 logs.go:123] Gathering logs for kube-apiserver [46018fe62e056d85a9d92909a3ffe934843c413af90487065b86f79764b6e416] ...
	I0731 23:40:43.216401 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46018fe62e056d85a9d92909a3ffe934843c413af90487065b86f79764b6e416"
	I0731 23:40:43.291210 1829252 logs.go:123] Gathering logs for storage-provisioner [af4a5d8f7ff273f31d82cb997af6a85d6437344b03032e0188e4e18ed9144b92] ...
	I0731 23:40:43.291247 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af4a5d8f7ff273f31d82cb997af6a85d6437344b03032e0188e4e18ed9144b92"
	I0731 23:40:43.333618 1829252 logs.go:123] Gathering logs for storage-provisioner [e7eb6229b113884a77ab02b300c2b453be8b1723c83dc8f8862391ed9ef419e1] ...
	I0731 23:40:43.333647 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7eb6229b113884a77ab02b300c2b453be8b1723c83dc8f8862391ed9ef419e1"
	I0731 23:40:43.373657 1829252 logs.go:123] Gathering logs for kubernetes-dashboard [d6f8ffde1c928670ca20febc0c4a113d8ad3e22267d8fd745d5c54e8cc854a27] ...
	I0731 23:40:43.373686 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6f8ffde1c928670ca20febc0c4a113d8ad3e22267d8fd745d5c54e8cc854a27"
	I0731 23:40:43.418607 1829252 logs.go:123] Gathering logs for etcd [6854efb05017bc711513167086bed436131771d8c2f42fe3910912ac80224c6c] ...
	I0731 23:40:43.418636 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6854efb05017bc711513167086bed436131771d8c2f42fe3910912ac80224c6c"
	I0731 23:40:43.484482 1829252 logs.go:123] Gathering logs for kube-scheduler [9ea334663167822d57c49e09d75be27e43c4d14410ab00696d0bd381e74ba548] ...
	I0731 23:40:43.484516 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea334663167822d57c49e09d75be27e43c4d14410ab00696d0bd381e74ba548"
	I0731 23:40:43.544591 1829252 logs.go:123] Gathering logs for kube-proxy [c21ba7634bd1a874c9ebc36d102421cb8c623f6f4fc40ef723fae526a1693796] ...
	I0731 23:40:43.544623 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c21ba7634bd1a874c9ebc36d102421cb8c623f6f4fc40ef723fae526a1693796"
	I0731 23:40:43.622041 1829252 logs.go:123] Gathering logs for kubelet ...
	I0731 23:40:43.622069 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 23:40:43.694432 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593291     742 reflector.go:138] object-"kube-system"/"kindnet-token-crzsj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-crzsj" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:43.694779 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593424     742 reflector.go:138] object-"kube-system"/"storage-provisioner-token-d22vf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-d22vf" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:43.695038 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593481     742 reflector.go:138] object-"kube-system"/"metrics-server-token-kr52c": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-kr52c" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:43.695680 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593534     742 reflector.go:138] object-"kube-system"/"kube-proxy-token-fzgtf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-fzgtf" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:43.695993 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593592     742 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:43.697348 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593646     742 reflector.go:138] object-"default"/"default-token-4prkn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-4prkn" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:43.697624 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593719     742 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:43.697848 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:07 old-k8s-version-130660 kubelet[742]: E0731 23:35:07.593768     742 reflector.go:138] object-"kube-system"/"coredns-token-6sgwf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-6sgwf" is forbidden: User "system:node:old-k8s-version-130660" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130660' and this object
	W0731 23:40:43.710124 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:09 old-k8s-version-130660 kubelet[742]: E0731 23:35:09.731660     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0731 23:40:43.710762 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:09 old-k8s-version-130660 kubelet[742]: E0731 23:35:09.980822     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.714635 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:23 old-k8s-version-130660 kubelet[742]: E0731 23:35:23.912962     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0731 23:40:43.716251 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:37 old-k8s-version-130660 kubelet[742]: E0731 23:35:37.900150     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.716594 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:38 old-k8s-version-130660 kubelet[742]: E0731 23:35:38.139768     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.717077 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:39 old-k8s-version-130660 kubelet[742]: E0731 23:35:39.142583     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.717609 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:42 old-k8s-version-130660 kubelet[742]: E0731 23:35:42.939292     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.719764 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:51 old-k8s-version-130660 kubelet[742]: E0731 23:35:51.915245     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0731 23:40:43.720376 1829252 logs.go:138] Found kubelet problem: Jul 31 23:35:55 old-k8s-version-130660 kubelet[742]: E0731 23:35:55.174079     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.720568 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:02 old-k8s-version-130660 kubelet[742]: E0731 23:36:02.900602     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.720909 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:02 old-k8s-version-130660 kubelet[742]: E0731 23:36:02.938961     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.721129 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:14 old-k8s-version-130660 kubelet[742]: E0731 23:36:14.900659     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.721793 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:16 old-k8s-version-130660 kubelet[742]: E0731 23:36:16.237810     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.722238 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:22 old-k8s-version-130660 kubelet[742]: E0731 23:36:22.938992     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.722440 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:25 old-k8s-version-130660 kubelet[742]: E0731 23:36:25.900141     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.722784 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:37 old-k8s-version-130660 kubelet[742]: E0731 23:36:37.900048     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.726823 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:37 old-k8s-version-130660 kubelet[742]: E0731 23:36:37.913581     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0731 23:40:43.727038 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:48 old-k8s-version-130660 kubelet[742]: E0731 23:36:48.900800     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.727383 1829252 logs.go:138] Found kubelet problem: Jul 31 23:36:50 old-k8s-version-130660 kubelet[742]: E0731 23:36:50.899714     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.727574 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:02 old-k8s-version-130660 kubelet[742]: E0731 23:37:02.901296     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.728202 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:06 old-k8s-version-130660 kubelet[742]: E0731 23:37:06.310165     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.728542 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:12 old-k8s-version-130660 kubelet[742]: E0731 23:37:12.939044     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.728737 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:14 old-k8s-version-130660 kubelet[742]: E0731 23:37:14.900812     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.729312 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:27 old-k8s-version-130660 kubelet[742]: E0731 23:37:27.899533     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.729512 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:28 old-k8s-version-130660 kubelet[742]: E0731 23:37:28.900944     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.729863 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:41 old-k8s-version-130660 kubelet[742]: E0731 23:37:41.899714     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.730056 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:42 old-k8s-version-130660 kubelet[742]: E0731 23:37:42.900065     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.730313 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:54 old-k8s-version-130660 kubelet[742]: E0731 23:37:54.900104     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.730658 1829252 logs.go:138] Found kubelet problem: Jul 31 23:37:55 old-k8s-version-130660 kubelet[742]: E0731 23:37:55.899783     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.732864 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:05 old-k8s-version-130660 kubelet[742]: E0731 23:38:05.912798     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0731 23:40:43.733822 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:10 old-k8s-version-130660 kubelet[742]: E0731 23:38:10.899721     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.734056 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:18 old-k8s-version-130660 kubelet[742]: E0731 23:38:18.901024     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.734397 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:21 old-k8s-version-130660 kubelet[742]: E0731 23:38:21.899662     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.734588 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:33 old-k8s-version-130660 kubelet[742]: E0731 23:38:33.900179     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.735701 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:36 old-k8s-version-130660 kubelet[742]: E0731 23:38:36.433618     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.736069 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:42 old-k8s-version-130660 kubelet[742]: E0731 23:38:42.939018     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.736263 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:46 old-k8s-version-130660 kubelet[742]: E0731 23:38:46.900616     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.736604 1829252 logs.go:138] Found kubelet problem: Jul 31 23:38:54 old-k8s-version-130660 kubelet[742]: E0731 23:38:54.899746     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.736845 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:01 old-k8s-version-130660 kubelet[742]: E0731 23:39:01.900337     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.737255 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:08 old-k8s-version-130660 kubelet[742]: E0731 23:39:08.899966     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.737462 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:14 old-k8s-version-130660 kubelet[742]: E0731 23:39:14.900213     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.737829 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:20 old-k8s-version-130660 kubelet[742]: E0731 23:39:20.899847     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.738043 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:25 old-k8s-version-130660 kubelet[742]: E0731 23:39:25.900329     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.738420 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:32 old-k8s-version-130660 kubelet[742]: E0731 23:39:32.900100     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.738641 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:36 old-k8s-version-130660 kubelet[742]: E0731 23:39:36.900554     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.739002 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:46 old-k8s-version-130660 kubelet[742]: E0731 23:39:46.899659     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.739207 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:47 old-k8s-version-130660 kubelet[742]: E0731 23:39:47.900629     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.739595 1829252 logs.go:138] Found kubelet problem: Jul 31 23:39:57 old-k8s-version-130660 kubelet[742]: E0731 23:39:57.899835     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.740089 1829252 logs.go:138] Found kubelet problem: Jul 31 23:40:02 old-k8s-version-130660 kubelet[742]: E0731 23:40:02.900337     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.740457 1829252 logs.go:138] Found kubelet problem: Jul 31 23:40:11 old-k8s-version-130660 kubelet[742]: E0731 23:40:11.899717     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.740670 1829252 logs.go:138] Found kubelet problem: Jul 31 23:40:16 old-k8s-version-130660 kubelet[742]: E0731 23:40:16.900104     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.741031 1829252 logs.go:138] Found kubelet problem: Jul 31 23:40:24 old-k8s-version-130660 kubelet[742]: E0731 23:40:24.899764     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.741253 1829252 logs.go:138] Found kubelet problem: Jul 31 23:40:27 old-k8s-version-130660 kubelet[742]: E0731 23:40:27.900206     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.741628 1829252 logs.go:138] Found kubelet problem: Jul 31 23:40:37 old-k8s-version-130660 kubelet[742]: E0731 23:40:37.900394     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.741830 1829252 logs.go:138] Found kubelet problem: Jul 31 23:40:38 old-k8s-version-130660 kubelet[742]: E0731 23:40:38.900915     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0731 23:40:43.741846 1829252 logs.go:123] Gathering logs for describe nodes ...
	I0731 23:40:43.741863 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 23:40:43.924872 1829252 logs.go:123] Gathering logs for coredns [de1b5cb3d187661a9da8df949ef36da261421dac028ae8e070be86f4226a470b] ...
	I0731 23:40:43.924913 1829252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de1b5cb3d187661a9da8df949ef36da261421dac028ae8e070be86f4226a470b"
	I0731 23:40:43.993229 1829252 out.go:304] Setting ErrFile to fd 2...
	I0731 23:40:43.993305 1829252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 23:40:43.993387 1829252 out.go:239] X Problems detected in kubelet:
	W0731 23:40:43.993430 1829252 out.go:239]   Jul 31 23:40:16 old-k8s-version-130660 kubelet[742]: E0731 23:40:16.900104     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.993462 1829252 out.go:239]   Jul 31 23:40:24 old-k8s-version-130660 kubelet[742]: E0731 23:40:24.899764     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.993512 1829252 out.go:239]   Jul 31 23:40:27 old-k8s-version-130660 kubelet[742]: E0731 23:40:27.900206     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0731 23:40:43.993545 1829252 out.go:239]   Jul 31 23:40:37 old-k8s-version-130660 kubelet[742]: E0731 23:40:37.900394     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	W0731 23:40:43.993586 1829252 out.go:239]   Jul 31 23:40:38 old-k8s-version-130660 kubelet[742]: E0731 23:40:38.900915     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0731 23:40:43.993620 1829252 out.go:304] Setting ErrFile to fd 2...
	I0731 23:40:43.993640 1829252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:40:53.995026 1829252 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0731 23:40:54.009309 1829252 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0731 23:40:54.011797 1829252 out.go:177] 
	W0731 23:40:54.013632 1829252 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0731 23:40:54.013674 1829252 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0731 23:40:54.013694 1829252 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0731 23:40:54.013700 1829252 out.go:239] * 
	W0731 23:40:54.014777 1829252 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 23:40:54.018048 1829252 out.go:177] 
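
The run above exits with K8S_UNHEALTHY_CONTROL_PLANE because the control plane never updated to v1.20.0 within the 6m0s wait. Following the suggestion minikube itself prints, a minimal recovery sketch might look like this (the profile name is taken from this run; the start flags are assumptions, not the exact flags the test harness used):

	# capture logs for a bug report, then wipe and recreate the profile
	minikube logs --file=logs.txt -p old-k8s-version-130660
	minikube delete --all --purge
	# assumed flags: chosen to match the Kubernetes version and runtime seen in this report
	minikube start -p old-k8s-version-130660 --kubernetes-version=v1.20.0 --container-runtime=crio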
	
	
	==> CRI-O <==
	Jul 31 23:38:46 old-k8s-version-130660 crio[634]: time="2024-07-31 23:38:46.899982742Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=dcf1dee3-8b64-4f71-af31-76ba64cd6c82 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 23:39:01 old-k8s-version-130660 crio[634]: time="2024-07-31 23:39:01.899876571Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=88bbc092-7d94-4914-a3a4-cfb636b0bd6f name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 23:39:01 old-k8s-version-130660 crio[634]: time="2024-07-31 23:39:01.900112984Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=88bbc092-7d94-4914-a3a4-cfb636b0bd6f name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 23:39:14 old-k8s-version-130660 crio[634]: time="2024-07-31 23:39:14.899751729Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=247be0dd-3fb0-4fa2-ba3f-b3c1bd34283f name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 23:39:14 old-k8s-version-130660 crio[634]: time="2024-07-31 23:39:14.899991359Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=247be0dd-3fb0-4fa2-ba3f-b3c1bd34283f name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 23:39:25 old-k8s-version-130660 crio[634]: time="2024-07-31 23:39:25.899657311Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=7a5ca698-53c8-4392-ac71-c4b3f4311b24 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 23:39:25 old-k8s-version-130660 crio[634]: time="2024-07-31 23:39:25.899903317Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=7a5ca698-53c8-4392-ac71-c4b3f4311b24 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 23:39:36 old-k8s-version-130660 crio[634]: time="2024-07-31 23:39:36.899668812Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=0434eb9e-1d72-4dcf-8e21-37f1439b5dfc name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 23:39:36 old-k8s-version-130660 crio[634]: time="2024-07-31 23:39:36.899921808Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=0434eb9e-1d72-4dcf-8e21-37f1439b5dfc name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 23:39:47 old-k8s-version-130660 crio[634]: time="2024-07-31 23:39:47.899686524Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=987e2000-23d7-4da5-a35b-a2804223d414 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 23:39:47 old-k8s-version-130660 crio[634]: time="2024-07-31 23:39:47.899937895Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=987e2000-23d7-4da5-a35b-a2804223d414 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 23:39:58 old-k8s-version-130660 crio[634]: time="2024-07-31 23:39:58.912839497Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=28d5b8c1-0749-463d-bcf8-8aa969781dd6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 23:39:58 old-k8s-version-130660 crio[634]: time="2024-07-31 23:39:58.913069117Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c,RepoTags:[k8s.gcr.io/pause:3.2 registry.k8s.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:31d3efd12022ffeffb3146bc10ae8beb890c80ed2f07363515580add7ed47636 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f registry.k8s.io/pause@sha256:31d3efd12022ffeffb3146bc10ae8beb890c80ed2f07363515580add7ed47636 registry.k8s.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:489397,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=28d5b8c1-0749-463d-bcf8-8aa969781dd6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 23:40:02 old-k8s-version-130660 crio[634]: time="2024-07-31 23:40:02.899673228Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=1eaeb963-6ff6-41ac-bfdf-9e91440753e5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 23:40:02 old-k8s-version-130660 crio[634]: time="2024-07-31 23:40:02.899928021Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=1eaeb963-6ff6-41ac-bfdf-9e91440753e5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 23:40:16 old-k8s-version-130660 crio[634]: time="2024-07-31 23:40:16.899702216Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=b2a4ab2b-023e-4bb6-96ef-dcfef381f09f name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 23:40:16 old-k8s-version-130660 crio[634]: time="2024-07-31 23:40:16.899941502Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=b2a4ab2b-023e-4bb6-96ef-dcfef381f09f name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 23:40:27 old-k8s-version-130660 crio[634]: time="2024-07-31 23:40:27.899722062Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=be5b8575-0edf-4ac4-b3cf-3a88d99d2bea name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 23:40:27 old-k8s-version-130660 crio[634]: time="2024-07-31 23:40:27.899973614Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=be5b8575-0edf-4ac4-b3cf-3a88d99d2bea name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 23:40:38 old-k8s-version-130660 crio[634]: time="2024-07-31 23:40:38.900210123Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=7de4aa53-93e5-43a9-8e76-8fcdc23369db name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 23:40:38 old-k8s-version-130660 crio[634]: time="2024-07-31 23:40:38.900522720Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=7de4aa53-93e5-43a9-8e76-8fcdc23369db name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 23:40:50 old-k8s-version-130660 crio[634]: time="2024-07-31 23:40:50.900421533Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=12f3756a-4f78-44f2-8531-e0c0ec728eac name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 23:40:50 old-k8s-version-130660 crio[634]: time="2024-07-31 23:40:50.900651349Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=12f3756a-4f78-44f2-8531-e0c0ec728eac name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 23:40:50 old-k8s-version-130660 crio[634]: time="2024-07-31 23:40:50.901447392Z" level=info msg="Pulling image: fake.domain/registry.k8s.io/echoserver:1.4" id=dd767987-3c0b-4561-a664-5472dee7cadf name=/runtime.v1alpha2.ImageService/PullImage
	Jul 31 23:40:50 old-k8s-version-130660 crio[634]: time="2024-07-31 23:40:50.904821476Z" level=info msg="Trying to access \"fake.domain/registry.k8s.io/echoserver:1.4\""
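
The CRI-O log above is dominated by ImageStatus and PullImage calls for fake.domain/registry.k8s.io/echoserver:1.4, matching the metrics-server ImagePullBackOff warnings in the kubelet section: fake.domain never resolves, so the pull can never succeed. One way to confirm this by hand from inside the node (a sketch; crictl appears elsewhere in this log, but this exact invocation is an assumption):

	minikube ssh -p old-k8s-version-130660
	sudo crictl images | grep echoserver                           # image is not present
	sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4    # fails: lookup fake.domain: no such host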
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	1b7540ca63d79       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           2 minutes ago       Exited              dashboard-metrics-scraper   5                   054246d4ac6ae       dashboard-metrics-scraper-8d5bb5db8-2djx2
	af4a5d8f7ff27       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           5 minutes ago       Running             storage-provisioner         1                   5c2f3c2cc5eea       storage-provisioner
	d6f8ffde1c928       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   5 minutes ago       Running             kubernetes-dashboard        0                   4609e01539fba       kubernetes-dashboard-cd95d586-d7gr2
	bab863fc639de       f42786f8afd2214fc59fbf9a26531806f562488d4a7d7a31e8b5e9ff6289b800                                           5 minutes ago       Running             kindnet-cni                 0                   cb7cb358f1a29       kindnet-mn77z
	c21ba7634bd1a       25a5233254979d0678a2db1d15b76b73dc380d81bc5eed93916ba5638b3cd894                                           5 minutes ago       Running             kube-proxy                  0                   e3896281f468c       kube-proxy-vsnfm
	de1b5cb3d1876       db91994f4ee8f894a1e8a6c1a76f615da8fc3c019300a3686291ce6fcbc57895                                           5 minutes ago       Running             coredns                     0                   f452ead0e2b49       coredns-74ff55c5b-nqxld
	e7eb6229b1138       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           5 minutes ago       Exited              storage-provisioner         0                   5c2f3c2cc5eea       storage-provisioner
	7f31493ecac90       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           5 minutes ago       Running             busybox                     0                   d2aef0590fa89       busybox
	9ea3346631678       e7605f88f17d6a4c3f083ef9c6f5f19b39f87e4d4406a05a8612b54a6ea57051                                           5 minutes ago       Running             kube-scheduler              0                   07731dd5a4bbe       kube-scheduler-old-k8s-version-130660
	46018fe62e056       2c08bbbc02d3aa5dfbf4e79f15c0a61424049288917aa10364464ca1f7de7157                                           5 minutes ago       Running             kube-apiserver              0                   20b068901f32d       kube-apiserver-old-k8s-version-130660
	6d2854e476f19       1df8a2b116bd16f7070fd383a6769c8d644b365575e8ffa3e492b84e4f05fc74                                           5 minutes ago       Running             kube-controller-manager     0                   eb210d4e4b833       kube-controller-manager-old-k8s-version-130660
	6854efb05017b       05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28                                           5 minutes ago       Running             etcd                        0                   7a6e1e669010a       etcd-old-k8s-version-130660
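
In the table above, dashboard-metrics-scraper is the only container in state Exited, on attempt 5, consistent with the kubelet's CrashLoopBackOff back-off ladder (10s, 20s, 40s, 1m20s, 2m40s). Its output can be read straight from CRI-O with the same crictl pattern the log gatherer uses; the container ID comes from the table, and the --tail value here is arbitrary:

	sudo crictl logs --tail 50 1b7540ca63d79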
	
	
	==> coredns [de1b5cb3d187661a9da8df949ef36da261421dac028ae8e070be86f4226a470b] <==
	I0731 23:35:40.078080       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-07-31 23:35:10.070755712 +0000 UTC m=+0.024212170) (total time: 30.004108197s):
	Trace[1427131847]: [30.004108197s] [30.004108197s] END
	I0731 23:35:40.078233       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-07-31 23:35:10.071295969 +0000 UTC m=+0.024752426) (total time: 30.003555608s):
	Trace[2019727887]: [30.003555608s] [30.003555608s] END
	E0731 23:35:40.078862       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	E0731 23:35:40.078874       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0731 23:35:40.078903       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-07-31 23:35:10.071660759 +0000 UTC m=+0.025117208) (total time: 30.003664146s):
	Trace[939984059]: [30.003664146s] [30.003664146s] END
	E0731 23:35:40.078939       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:54533 - 46215 "HINFO IN 2621100387478719196.2366068292622856925. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.056196008s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:36222 - 27038 "HINFO IN 1953295842623008160.1100682652371033117. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031370362s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
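
The coredns log above captures a failed generation and a healthy one: the first instance cannot reach the API server at 10.96.0.1:443 (each reflector list times out after 30s) and is shut down with SIGTERM, while the second starts cleanly and serves on :53. If this pattern showed up without an accompanying restart, a first check would be whether the in-cluster kubernetes Service has endpoints (a sketch; the kubectl context is assumed to match the profile name):

	kubectl --context old-k8s-version-130660 -n default get svc kubernetes
	kubectl --context old-k8s-version-130660 -n default get endpoints kubernetes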
	
	
	==> describe nodes <==
	Name:               old-k8s-version-130660
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-130660
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=old-k8s-version-130660
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T23_32_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 23:32:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-130660
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 23:40:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 23:36:08 +0000   Wed, 31 Jul 2024 23:32:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 23:36:08 +0000   Wed, 31 Jul 2024 23:32:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 23:36:08 +0000   Wed, 31 Jul 2024 23:32:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 23:36:08 +0000   Wed, 31 Jul 2024 23:33:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-130660
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 8357a49475f8484ca10c4e4c308f4805
	  System UUID:                d80ff0ed-14fa-4145-ac0d-4897475e9a9c
	  Boot ID:                    2daee006-f42a-4cec-a0b1-7137cc9806d6
	  Kernel Version:             5.15.0-1066-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 coredns-74ff55c5b-nqxld                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m7s
	  kube-system                 etcd-old-k8s-version-130660                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m16s
	  kube-system                 kindnet-mn77z                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m7s
	  kube-system                 kube-apiserver-old-k8s-version-130660             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-controller-manager-old-k8s-version-130660    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-proxy-vsnfm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 kube-scheduler-old-k8s-version-130660             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 metrics-server-9975d5f86-w25p2                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m27s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m5s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-2djx2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-d7gr2               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m36s (x6 over 8m36s)  kubelet     Node old-k8s-version-130660 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m36s (x6 over 8m36s)  kubelet     Node old-k8s-version-130660 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m36s (x5 over 8m36s)  kubelet     Node old-k8s-version-130660 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m16s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m16s                  kubelet     Node old-k8s-version-130660 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m16s                  kubelet     Node old-k8s-version-130660 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m16s                  kubelet     Node old-k8s-version-130660 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m5s                   kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                7m46s                  kubelet     Node old-k8s-version-130660 status is now: NodeReady
	  Normal  Starting                 5m57s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m56s (x8 over 5m57s)  kubelet     Node old-k8s-version-130660 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m56s (x8 over 5m57s)  kubelet     Node old-k8s-version-130660 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m56s (x8 over 5m57s)  kubelet     Node old-k8s-version-130660 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m45s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.001076] FS-Cache: O-key=[8] 'e9425c0100000000'
	[  +0.000699] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000926] FS-Cache: N-cookie d=00000000f00cf1f7{9p.inode} n=0000000010dcae3a
	[  +0.001066] FS-Cache: N-key=[8] 'e9425c0100000000'
	[  +0.003323] FS-Cache: Duplicate cookie detected
	[  +0.000714] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.000965] FS-Cache: O-cookie d=00000000f00cf1f7{9p.inode} n=000000008137feed
	[  +0.001033] FS-Cache: O-key=[8] 'e9425c0100000000'
	[  +0.000700] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000934] FS-Cache: N-cookie d=00000000f00cf1f7{9p.inode} n=000000008229c300
	[  +0.001049] FS-Cache: N-key=[8] 'e9425c0100000000'
	[  +2.406021] FS-Cache: Duplicate cookie detected
	[  +0.000900] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.000989] FS-Cache: O-cookie d=00000000f00cf1f7{9p.inode} n=000000006a2b576b
	[  +0.001058] FS-Cache: O-key=[8] 'e8425c0100000000'
	[  +0.000713] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000953] FS-Cache: N-cookie d=00000000f00cf1f7{9p.inode} n=0000000010dcae3a
	[  +0.001056] FS-Cache: N-key=[8] 'e8425c0100000000'
	[  +0.338126] FS-Cache: Duplicate cookie detected
	[  +0.000735] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001013] FS-Cache: O-cookie d=00000000f00cf1f7{9p.inode} n=00000000b39452db
	[  +0.001050] FS-Cache: O-key=[8] 'ee425c0100000000'
	[  +0.000704] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.000939] FS-Cache: N-cookie d=00000000f00cf1f7{9p.inode} n=0000000096d94e31
	[  +0.001056] FS-Cache: N-key=[8] 'ee425c0100000000'
	
	
	==> etcd [6854efb05017bc711513167086bed436131771d8c2f42fe3910912ac80224c6c] <==
	2024-07-31 23:36:47.396396 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-31 23:36:57.396497 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-31 23:37:07.396344 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-31 23:37:17.396414 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-31 23:37:27.396411 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-31 23:37:37.396285 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-31 23:37:47.396370 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-31 23:37:57.396473 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-31 23:38:07.396641 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-31 23:38:17.396383 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-31 23:38:27.396338 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-31 23:38:37.396350 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-31 23:38:47.396471 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-31 23:38:57.396329 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-31 23:39:07.396308 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-31 23:39:17.396617 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-31 23:39:27.396428 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-31 23:39:37.396376 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-31 23:39:47.396384 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-31 23:39:57.396818 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-31 23:40:07.396322 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-31 23:40:17.396254 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-31 23:40:27.396337 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-31 23:40:37.396306 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-31 23:40:47.396247 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 23:40:55 up  7:23,  0 users,  load average: 0.95, 1.65, 2.49
	Linux old-k8s-version-130660 5.15.0-1066-aws #72~20.04.1-Ubuntu SMP Sat Jul 20 07:44:07 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [bab863fc639de5e0d2e1165a89d0b3b22043e0ffdb80ad3296bca7a7262985f7] <==
	I0731 23:38:52.021675       1 main.go:299] handling current node
	I0731 23:39:02.021193       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0731 23:39:02.021230       1 main.go:299] handling current node
	I0731 23:39:12.014287       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0731 23:39:12.014324       1 main.go:299] handling current node
	I0731 23:39:22.013724       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0731 23:39:22.013764       1 main.go:299] handling current node
	I0731 23:39:32.021223       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0731 23:39:32.021353       1 main.go:299] handling current node
	I0731 23:39:42.021386       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0731 23:39:42.021425       1 main.go:299] handling current node
	I0731 23:39:52.019131       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0731 23:39:52.019284       1 main.go:299] handling current node
	I0731 23:40:02.023275       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0731 23:40:02.023391       1 main.go:299] handling current node
	I0731 23:40:12.013693       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0731 23:40:12.013735       1 main.go:299] handling current node
	I0731 23:40:22.021175       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0731 23:40:22.021281       1 main.go:299] handling current node
	I0731 23:40:32.021201       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0731 23:40:32.021237       1 main.go:299] handling current node
	I0731 23:40:42.022197       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0731 23:40:42.022256       1 main.go:299] handling current node
	I0731 23:40:52.017287       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0731 23:40:52.017351       1 main.go:299] handling current node
	
	
	==> kube-apiserver [46018fe62e056d85a9d92909a3ffe934843c413af90487065b86f79764b6e416] <==
	I0731 23:37:38.882034       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0731 23:37:38.882042       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0731 23:38:10.080413       1 client.go:360] parsed scheme: "passthrough"
	I0731 23:38:10.080457       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0731 23:38:10.080466       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0731 23:38:10.867986       1 handler_proxy.go:102] no RequestInfo found in the context
	E0731 23:38:10.868058       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 23:38:10.868066       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0731 23:38:51.142601       1 client.go:360] parsed scheme: "passthrough"
	I0731 23:38:51.142644       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0731 23:38:51.142653       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0731 23:39:25.957143       1 client.go:360] parsed scheme: "passthrough"
	I0731 23:39:25.957185       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0731 23:39:25.957193       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0731 23:40:07.465284       1 client.go:360] parsed scheme: "passthrough"
	I0731 23:40:07.465440       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0731 23:40:07.465476       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0731 23:40:08.877029       1 handler_proxy.go:102] no RequestInfo found in the context
	E0731 23:40:08.877164       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 23:40:08.877179       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0731 23:40:37.586805       1 client.go:360] parsed scheme: "passthrough"
	I0731 23:40:37.586933       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0731 23:40:37.586967       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [6d2854e476f196556888fd39fa1c3e23a43223b6bf7632e42e8a2d3e07316f00] <==
	W0731 23:36:32.360586       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0731 23:36:58.400694       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0731 23:37:04.011162       1 request.go:655] Throttling request took 1.047917103s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
	W0731 23:37:04.862583       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0731 23:37:28.903029       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0731 23:37:36.512961       1 request.go:655] Throttling request took 1.048290503s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0731 23:37:37.364516       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0731 23:37:59.405016       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0731 23:38:09.014778       1 request.go:655] Throttling request took 1.048214158s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0731 23:38:09.866328       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0731 23:38:29.906798       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0731 23:38:41.516784       1 request.go:655] Throttling request took 1.048355477s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0731 23:38:42.368195       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0731 23:39:00.408907       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0731 23:39:14.018633       1 request.go:655] Throttling request took 1.048495778s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0731 23:39:14.870354       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0731 23:39:30.914765       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0731 23:39:46.520782       1 request.go:655] Throttling request took 1.048469672s, request: GET:https://192.168.76.2:8443/apis/coordination.k8s.io/v1?timeout=32s
	W0731 23:39:47.372394       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0731 23:40:01.416826       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0731 23:40:19.022828       1 request.go:655] Throttling request took 1.048300016s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0731 23:40:19.874468       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0731 23:40:31.919243       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0731 23:40:51.524885       1 request.go:655] Throttling request took 1.048378163s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W0731 23:40:52.376307       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [c21ba7634bd1a874c9ebc36d102421cb8c623f6f4fc40ef723fae526a1693796] <==
	I0731 23:32:50.371923       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0731 23:32:50.381291       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0731 23:32:50.791345       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0731 23:32:50.791523       1 server_others.go:185] Using iptables Proxier.
	I0731 23:32:50.791895       1 server.go:650] Version: v1.20.0
	I0731 23:32:50.806445       1 config.go:315] Starting service config controller
	I0731 23:32:50.806556       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0731 23:32:50.806611       1 config.go:224] Starting endpoint slice config controller
	I0731 23:32:50.806669       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0731 23:32:50.943544       1 shared_informer.go:247] Caches are synced for service config 
	I0731 23:32:51.006913       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0731 23:35:10.390130       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0731 23:35:10.390209       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0731 23:35:10.410547       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0731 23:35:10.410636       1 server_others.go:185] Using iptables Proxier.
	I0731 23:35:10.410945       1 server.go:650] Version: v1.20.0
	I0731 23:35:10.411678       1 config.go:315] Starting service config controller
	I0731 23:35:10.411696       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0731 23:35:10.411712       1 config.go:224] Starting endpoint slice config controller
	I0731 23:35:10.411716       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0731 23:35:10.511842       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0731 23:35:10.511853       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [9ea334663167822d57c49e09d75be27e43c4d14410ab00696d0bd381e74ba548] <==
	E0731 23:32:27.669529       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 23:32:27.669618       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 23:32:27.669682       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 23:32:27.669743       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 23:32:27.669800       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 23:32:27.669855       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 23:32:27.669917       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 23:32:27.669970       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 23:32:27.670025       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 23:32:27.670073       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 23:32:27.670130       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 23:32:27.700096       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 23:32:28.578195       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 23:32:28.828228       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0731 23:32:30.738009       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0731 23:35:02.621448       1 serving.go:331] Generated self-signed cert in-memory
	W0731 23:35:07.673442       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 23:35:07.679740       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 23:35:07.679851       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 23:35:07.679880       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 23:35:08.077324       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 23:35:08.078129       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0731 23:35:08.078516       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0731 23:35:08.080558       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 23:35:08.281060       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Jul 31 23:39:32 old-k8s-version-130660 kubelet[742]: I0731 23:39:32.899342     742 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1b7540ca63d793d42628186a8999cc39f9be115888f99c4961663ec84e58c5f2
	Jul 31 23:39:32 old-k8s-version-130660 kubelet[742]: E0731 23:39:32.900100     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	Jul 31 23:39:36 old-k8s-version-130660 kubelet[742]: E0731 23:39:36.900554     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 31 23:39:46 old-k8s-version-130660 kubelet[742]: I0731 23:39:46.899323     742 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1b7540ca63d793d42628186a8999cc39f9be115888f99c4961663ec84e58c5f2
	Jul 31 23:39:46 old-k8s-version-130660 kubelet[742]: E0731 23:39:46.899659     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	Jul 31 23:39:47 old-k8s-version-130660 kubelet[742]: E0731 23:39:47.900629     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 31 23:39:57 old-k8s-version-130660 kubelet[742]: I0731 23:39:57.899372     742 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1b7540ca63d793d42628186a8999cc39f9be115888f99c4961663ec84e58c5f2
	Jul 31 23:39:57 old-k8s-version-130660 kubelet[742]: E0731 23:39:57.899835     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	Jul 31 23:39:58 old-k8s-version-130660 kubelet[742]: E0731 23:39:58.947686     742 container_manager_linux.go:533] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /docker/113aa41627b627283271b1441f6762c7553c9518cd510a6ea5aeb3e393c5884b, memory: /docker/113aa41627b627283271b1441f6762c7553c9518cd510a6ea5aeb3e393c5884b/system.slice/kubelet.service
	Jul 31 23:40:02 old-k8s-version-130660 kubelet[742]: E0731 23:40:02.900337     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 31 23:40:11 old-k8s-version-130660 kubelet[742]: I0731 23:40:11.899356     742 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1b7540ca63d793d42628186a8999cc39f9be115888f99c4961663ec84e58c5f2
	Jul 31 23:40:11 old-k8s-version-130660 kubelet[742]: E0731 23:40:11.899717     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	Jul 31 23:40:16 old-k8s-version-130660 kubelet[742]: E0731 23:40:16.900104     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 31 23:40:24 old-k8s-version-130660 kubelet[742]: I0731 23:40:24.899403     742 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1b7540ca63d793d42628186a8999cc39f9be115888f99c4961663ec84e58c5f2
	Jul 31 23:40:24 old-k8s-version-130660 kubelet[742]: E0731 23:40:24.899764     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	Jul 31 23:40:27 old-k8s-version-130660 kubelet[742]: E0731 23:40:27.900206     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 31 23:40:37 old-k8s-version-130660 kubelet[742]: I0731 23:40:37.900121     742 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1b7540ca63d793d42628186a8999cc39f9be115888f99c4961663ec84e58c5f2
	Jul 31 23:40:37 old-k8s-version-130660 kubelet[742]: E0731 23:40:37.900394     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	Jul 31 23:40:38 old-k8s-version-130660 kubelet[742]: E0731 23:40:38.900915     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 31 23:40:50 old-k8s-version-130660 kubelet[742]: I0731 23:40:50.899725     742 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1b7540ca63d793d42628186a8999cc39f9be115888f99c4961663ec84e58c5f2
	Jul 31 23:40:50 old-k8s-version-130660 kubelet[742]: E0731 23:40:50.900033     742 pod_workers.go:191] Error syncing pod 62cef1bb-e263-4d9e-8953-1ad4573f338f ("dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2djx2_kubernetes-dashboard(62cef1bb-e263-4d9e-8953-1ad4573f338f)"
	Jul 31 23:40:50 old-k8s-version-130660 kubelet[742]: E0731 23:40:50.910039     742 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 31 23:40:50 old-k8s-version-130660 kubelet[742]: E0731 23:40:50.910095     742 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 31 23:40:50 old-k8s-version-130660 kubelet[742]: E0731 23:40:50.910237     742 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-kr52c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c): ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 31 23:40:50 old-k8s-version-130660 kubelet[742]: E0731 23:40:50.910265     742 pod_workers.go:191] Error syncing pod 180c0bc6-049d-49d6-99f7-1e714957a21c ("metrics-server-9975d5f86-w25p2_kube-system(180c0bc6-049d-49d6-99f7-1e714957a21c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	
	
	==> kubernetes-dashboard [d6f8ffde1c928670ca20febc0c4a113d8ad3e22267d8fd745d5c54e8cc854a27] <==
	2024/07/31 23:35:32 Starting overwatch
	2024/07/31 23:35:32 Using namespace: kubernetes-dashboard
	2024/07/31 23:35:32 Using in-cluster config to connect to apiserver
	2024/07/31 23:35:32 Using secret token for csrf signing
	2024/07/31 23:35:32 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/07/31 23:35:32 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/07/31 23:35:32 Successful initial request to the apiserver, version: v1.20.0
	2024/07/31 23:35:32 Generating JWE encryption key
	2024/07/31 23:35:32 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/07/31 23:35:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/07/31 23:35:33 Initializing JWE encryption key from synchronized object
	2024/07/31 23:35:33 Creating in-cluster Sidecar client
	2024/07/31 23:35:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/31 23:35:33 Serving insecurely on HTTP port: 9090
	2024/07/31 23:36:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/31 23:36:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/31 23:37:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/31 23:37:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/31 23:38:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/31 23:38:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/31 23:39:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/31 23:39:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/31 23:40:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/31 23:40:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [af4a5d8f7ff273f31d82cb997af6a85d6437344b03032e0188e4e18ed9144b92] <==
	I0731 23:35:40.256455       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 23:35:40.272246       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 23:35:40.272298       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 23:35:57.703286       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 23:35:57.703455       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-130660_c5fdeed3-ab2e-4cb4-8e19-c738de15768f!
	I0731 23:35:57.704829       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bfc914c5-d339-44cd-8001-02f9570aee39", APIVersion:"v1", ResourceVersion:"825", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-130660_c5fdeed3-ab2e-4cb4-8e19-c738de15768f became leader
	I0731 23:35:57.804156       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-130660_c5fdeed3-ab2e-4cb4-8e19-c738de15768f!
	
	
	==> storage-provisioner [e7eb6229b113884a77ab02b300c2b453be8b1723c83dc8f8862391ed9ef419e1] <==
	I0731 23:33:14.768146       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 23:33:14.835948       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 23:33:14.836102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 23:33:14.890243       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 23:33:14.890464       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-130660_4ca33899-0055-4da1-b50e-ffa5654739b2!
	I0731 23:33:14.897872       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bfc914c5-d339-44cd-8001-02f9570aee39", APIVersion:"v1", ResourceVersion:"498", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-130660_4ca33899-0055-4da1-b50e-ffa5654739b2 became leader
	I0731 23:33:14.991616       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-130660_4ca33899-0055-4da1-b50e-ffa5654739b2!
	I0731 23:35:09.805699       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0731 23:35:39.807837       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-130660 -n old-k8s-version-130660
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-130660 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-w25p2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-130660 describe pod metrics-server-9975d5f86-w25p2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-130660 describe pod metrics-server-9975d5f86-w25p2: exit status 1 (96.845839ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-w25p2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-130660 describe pod metrics-server-9975d5f86-w25p2: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (374.06s)
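For context on the failure above: the node description and kubelet log show metrics-server-9975d5f86-w25p2 stuck in ImagePullBackOff because its image is pinned to the unresolvable registry fake.domain (a deliberate fixture in this suite), while dashboard-metrics-scraper sits in CrashLoopBackOff. A minimal triage sketch against a still-running cluster, assuming the old-k8s-version-130660 context still exists and the label selectors of the stock minikube addon manifests (the commands below are illustrative, not part of the harness):

	# List pod state in the two affected namespaces
	kubectl --context old-k8s-version-130660 -n kube-system get pods -o wide
	kubectl --context old-k8s-version-130660 -n kubernetes-dashboard get pods -o wide
	# Confirm the ImagePullBackOff reason in the Events section
	kubectl --context old-k8s-version-130660 -n kube-system describe pod -l k8s-app=metrics-server
	# Inspect the crashing scraper's previous container logs
	kubectl --context old-k8s-version-130660 -n kubernetes-dashboard logs -l k8s-app=dashboard-metrics-scraper --previous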

                                                
                                    

Test pass (300/336)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.34
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.3/json-events 8.49
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.07
18 TestDownloadOnly/v1.30.3/DeleteAll 0.19
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-beta.0/json-events 9.76
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.2
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.54
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 226.26
40 TestAddons/serial/GCPAuth/Namespaces 0.21
42 TestAddons/parallel/Registry 18.03
44 TestAddons/parallel/InspektorGadget 11.8
48 TestAddons/parallel/CSI 60.79
49 TestAddons/parallel/Headlamp 17.62
50 TestAddons/parallel/CloudSpanner 6.58
51 TestAddons/parallel/LocalPath 51.57
52 TestAddons/parallel/NvidiaDevicePlugin 6.56
53 TestAddons/parallel/Yakd 11.76
54 TestAddons/StoppedEnableDisable 12.15
55 TestCertOptions 40.57
56 TestCertExpiration 237.56
58 TestForceSystemdFlag 42.34
59 TestForceSystemdEnv 39.79
65 TestErrorSpam/setup 31.81
66 TestErrorSpam/start 0.67
67 TestErrorSpam/status 0.99
68 TestErrorSpam/pause 1.69
69 TestErrorSpam/unpause 1.76
70 TestErrorSpam/stop 1.39
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 59.27
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 44.24
77 TestFunctional/serial/KubeContext 0.06
78 TestFunctional/serial/KubectlGetPods 0.09
81 TestFunctional/serial/CacheCmd/cache/add_remote 4.33
82 TestFunctional/serial/CacheCmd/cache/add_local 1.07
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
86 TestFunctional/serial/CacheCmd/cache/cache_reload 2.13
87 TestFunctional/serial/CacheCmd/cache/delete 0.12
88 TestFunctional/serial/MinikubeKubectlCmd 0.14
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
90 TestFunctional/serial/ExtraConfig 38.4
91 TestFunctional/serial/ComponentHealth 0.09
92 TestFunctional/serial/LogsCmd 1.68
93 TestFunctional/serial/LogsFileCmd 1.72
94 TestFunctional/serial/InvalidService 4.13
96 TestFunctional/parallel/ConfigCmd 0.46
97 TestFunctional/parallel/DashboardCmd 12.21
98 TestFunctional/parallel/DryRun 0.4
99 TestFunctional/parallel/InternationalLanguage 0.17
100 TestFunctional/parallel/StatusCmd 1.11
104 TestFunctional/parallel/ServiceCmdConnect 12.69
105 TestFunctional/parallel/AddonsCmd 0.2
106 TestFunctional/parallel/PersistentVolumeClaim 25.1
108 TestFunctional/parallel/SSHCmd 0.66
109 TestFunctional/parallel/CpCmd 2.37
111 TestFunctional/parallel/FileSync 0.32
112 TestFunctional/parallel/CertSync 2.23
116 TestFunctional/parallel/NodeLabels 0.11
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.74
120 TestFunctional/parallel/License 0.26
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.67
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.41
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
132 TestFunctional/parallel/ServiceCmd/DeployApp 6.21
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
134 TestFunctional/parallel/ProfileCmd/profile_list 0.38
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
136 TestFunctional/parallel/MountCmd/any-port 7.36
137 TestFunctional/parallel/ServiceCmd/List 0.53
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.59
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
140 TestFunctional/parallel/ServiceCmd/Format 0.39
141 TestFunctional/parallel/ServiceCmd/URL 0.37
142 TestFunctional/parallel/MountCmd/specific-port 2.17
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.65
144 TestFunctional/parallel/Version/short 0.09
145 TestFunctional/parallel/Version/components 1.55
146 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
147 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
148 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
149 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
150 TestFunctional/parallel/ImageCommands/ImageBuild 2.67
151 TestFunctional/parallel/ImageCommands/Setup 0.78
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.92
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.26
155 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
156 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
157 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
158 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.64
159 TestFunctional/parallel/ImageCommands/ImageRemove 0.6
160 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.95
161 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.81
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestMultiControlPlane/serial/StartCluster 192.52
169 TestMultiControlPlane/serial/DeployApp 6.39
170 TestMultiControlPlane/serial/PingHostFromPods 1.6
171 TestMultiControlPlane/serial/AddWorkerNode 37.97
172 TestMultiControlPlane/serial/NodeLabels 0.11
173 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.82
174 TestMultiControlPlane/serial/CopyFile 18.72
175 TestMultiControlPlane/serial/StopSecondaryNode 12.72
176 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.59
177 TestMultiControlPlane/serial/RestartSecondaryNode 48.32
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.72
179 TestMultiControlPlane/serial/RestartClusterKeepsNodes 144.88
180 TestMultiControlPlane/serial/DeleteSecondaryNode 12.79
181 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.54
182 TestMultiControlPlane/serial/StopCluster 35.88
183 TestMultiControlPlane/serial/RestartCluster 97.68
184 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.56
185 TestMultiControlPlane/serial/AddSecondaryNode 47.53
186 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.76
190 TestJSONOutput/start/Command 59.41
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 0.74
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.63
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 5.8
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.22
215 TestKicCustomNetwork/create_custom_network 38.7
216 TestKicCustomNetwork/use_default_bridge_network 32.95
217 TestKicExistingNetwork 31.85
218 TestKicCustomSubnet 35.09
219 TestKicStaticIP 33.93
220 TestMainNoArgs 0.05
221 TestMinikubeProfile 66.92
224 TestMountStart/serial/StartWithMountFirst 6.79
225 TestMountStart/serial/VerifyMountFirst 0.27
226 TestMountStart/serial/StartWithMountSecond 9.66
227 TestMountStart/serial/VerifyMountSecond 0.25
228 TestMountStart/serial/DeleteFirst 1.6
229 TestMountStart/serial/VerifyMountPostDelete 0.26
230 TestMountStart/serial/Stop 1.2
231 TestMountStart/serial/RestartStopped 8.29
232 TestMountStart/serial/VerifyMountPostStop 0.27
235 TestMultiNode/serial/FreshStart2Nodes 91.46
236 TestMultiNode/serial/DeployApp2Nodes 4.67
237 TestMultiNode/serial/PingHostFrom2Pods 0.97
238 TestMultiNode/serial/AddNode 32.34
239 TestMultiNode/serial/MultiNodeLabels 0.1
240 TestMultiNode/serial/ProfileList 0.32
241 TestMultiNode/serial/CopyFile 9.92
242 TestMultiNode/serial/StopNode 2.2
243 TestMultiNode/serial/StartAfterStop 9.83
244 TestMultiNode/serial/RestartKeepsNodes 115.83
245 TestMultiNode/serial/DeleteNode 5.7
246 TestMultiNode/serial/StopMultiNode 23.79
247 TestMultiNode/serial/RestartMultiNode 49
248 TestMultiNode/serial/ValidateNameConflict 34.84
253 TestPreload 142.83
255 TestScheduledStopUnix 107.54
258 TestInsufficientStorage 13.46
259 TestRunningBinaryUpgrade 75.71
261 TestKubernetesUpgrade 383.51
262 TestMissingContainerUpgrade 140.77
264 TestPause/serial/Start 68.56
266 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
267 TestNoKubernetes/serial/StartWithK8s 43.58
268 TestNoKubernetes/serial/StartWithStopK8s 6.88
269 TestNoKubernetes/serial/Start 9.76
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
271 TestNoKubernetes/serial/ProfileList 0.98
272 TestNoKubernetes/serial/Stop 1.25
273 TestNoKubernetes/serial/StartNoArgs 7.43
274 TestPause/serial/SecondStartNoReconfiguration 25.6
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
283 TestNetworkPlugins/group/false 4.97
287 TestPause/serial/Pause 1.07
288 TestPause/serial/VerifyStatus 0.36
289 TestPause/serial/Unpause 0.79
290 TestPause/serial/PauseAgain 1.13
291 TestPause/serial/DeletePaused 3.27
292 TestPause/serial/VerifyDeletedResources 0.45
293 TestStoppedBinaryUpgrade/Setup 0.75
294 TestStoppedBinaryUpgrade/Upgrade 86.16
302 TestNetworkPlugins/group/auto/Start 71.92
303 TestNetworkPlugins/group/auto/KubeletFlags 0.4
304 TestNetworkPlugins/group/auto/NetCatPod 12.37
305 TestStoppedBinaryUpgrade/MinikubeLogs 1.61
306 TestNetworkPlugins/group/kindnet/Start 71.29
307 TestNetworkPlugins/group/auto/DNS 0.27
308 TestNetworkPlugins/group/auto/Localhost 0.22
309 TestNetworkPlugins/group/auto/HairPin 0.2
310 TestNetworkPlugins/group/calico/Start 75.21
311 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
312 TestNetworkPlugins/group/kindnet/KubeletFlags 0.52
313 TestNetworkPlugins/group/kindnet/NetCatPod 12.38
314 TestNetworkPlugins/group/kindnet/DNS 0.27
315 TestNetworkPlugins/group/kindnet/Localhost 0.21
316 TestNetworkPlugins/group/kindnet/HairPin 0.19
317 TestNetworkPlugins/group/calico/ControllerPod 6.01
318 TestNetworkPlugins/group/calico/KubeletFlags 0.36
319 TestNetworkPlugins/group/calico/NetCatPod 12.38
320 TestNetworkPlugins/group/custom-flannel/Start 71.38
321 TestNetworkPlugins/group/calico/DNS 0.27
322 TestNetworkPlugins/group/calico/Localhost 0.23
323 TestNetworkPlugins/group/calico/HairPin 0.25
324 TestNetworkPlugins/group/enable-default-cni/Start 86.59
325 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
326 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.27
327 TestNetworkPlugins/group/custom-flannel/DNS 0.25
328 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
329 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
330 TestNetworkPlugins/group/flannel/Start 67.02
331 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
332 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.31
333 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
334 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
335 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
336 TestNetworkPlugins/group/bridge/Start 87.09
337 TestNetworkPlugins/group/flannel/ControllerPod 6.01
338 TestNetworkPlugins/group/flannel/KubeletFlags 0.4
339 TestNetworkPlugins/group/flannel/NetCatPod 12.32
340 TestNetworkPlugins/group/flannel/DNS 0.3
341 TestNetworkPlugins/group/flannel/Localhost 0.21
342 TestNetworkPlugins/group/flannel/HairPin 0.22
344 TestStartStop/group/old-k8s-version/serial/FirstStart 160.54
345 TestNetworkPlugins/group/bridge/KubeletFlags 0.39
346 TestNetworkPlugins/group/bridge/NetCatPod 10.29
347 TestNetworkPlugins/group/bridge/DNS 0.25
348 TestNetworkPlugins/group/bridge/Localhost 0.23
349 TestNetworkPlugins/group/bridge/HairPin 0.26
351 TestStartStop/group/no-preload/serial/FirstStart 67.16
352 TestStartStop/group/no-preload/serial/DeployApp 9.58
353 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
354 TestStartStop/group/no-preload/serial/Stop 11.98
355 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
356 TestStartStop/group/no-preload/serial/SecondStart 301.08
357 TestStartStop/group/old-k8s-version/serial/DeployApp 9.6
358 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.94
359 TestStartStop/group/old-k8s-version/serial/Stop 13.03
360 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.3
362 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
363 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.12
364 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
365 TestStartStop/group/no-preload/serial/Pause 3.15
367 TestStartStop/group/embed-certs/serial/FirstStart 61.85
368 TestStartStop/group/embed-certs/serial/DeployApp 8.4
369 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
370 TestStartStop/group/embed-certs/serial/Stop 12.03
371 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
372 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
373 TestStartStop/group/embed-certs/serial/SecondStart 277.63
374 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.14
375 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.35
376 TestStartStop/group/old-k8s-version/serial/Pause 4.4
378 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 62.42
379 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.36
380 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.11
381 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.94
382 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
383 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 265.82
384 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
385 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
386 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
387 TestStartStop/group/embed-certs/serial/Pause 3.13
389 TestStartStop/group/newest-cni/serial/FirstStart 36.65
390 TestStartStop/group/newest-cni/serial/DeployApp 0
391 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.47
392 TestStartStop/group/newest-cni/serial/Stop 1.28
393 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
394 TestStartStop/group/newest-cni/serial/SecondStart 16.4
395 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
396 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
397 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.38
398 TestStartStop/group/newest-cni/serial/Pause 3.25
399 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
400 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
401 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
402 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.88

TestDownloadOnly/v1.20.0/json-events (10.34s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-818567 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-818567 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.334878177s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.34s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-818567
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-818567: exit status 85 (76.283846ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-818567 | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC |          |
	|         | -p download-only-818567        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 22:31:02
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 22:31:02.152532 1584620 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:31:02.152656 1584620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:31:02.152669 1584620 out.go:304] Setting ErrFile to fd 2...
	I0731 22:31:02.152674 1584620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:31:02.152906 1584620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1579223/.minikube/bin
	W0731 22:31:02.153036 1584620 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19360-1579223/.minikube/config/config.json: open /home/jenkins/minikube-integration/19360-1579223/.minikube/config/config.json: no such file or directory
	I0731 22:31:02.153450 1584620 out.go:298] Setting JSON to true
	I0731 22:31:02.154290 1584620 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22401,"bootTime":1722442662,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0731 22:31:02.154358 1584620 start.go:139] virtualization:  
	I0731 22:31:02.157573 1584620 out.go:97] [download-only-818567] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0731 22:31:02.157821 1584620 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19360-1579223/.minikube/cache/preloaded-tarball: no such file or directory
	I0731 22:31:02.157890 1584620 notify.go:220] Checking for updates...
	I0731 22:31:02.159439 1584620 out.go:169] MINIKUBE_LOCATION=19360
	I0731 22:31:02.161606 1584620 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 22:31:02.163560 1584620 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19360-1579223/kubeconfig
	I0731 22:31:02.165550 1584620 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1579223/.minikube
	I0731 22:31:02.167318 1584620 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0731 22:31:02.170646 1584620 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 22:31:02.170971 1584620 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 22:31:02.193208 1584620 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0731 22:31:02.193343 1584620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 22:31:02.260083 1584620 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-07-31 22:31:02.250066503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0731 22:31:02.260195 1584620 docker.go:307] overlay module found
	I0731 22:31:02.262420 1584620 out.go:97] Using the docker driver based on user configuration
	I0731 22:31:02.262445 1584620 start.go:297] selected driver: docker
	I0731 22:31:02.262452 1584620 start.go:901] validating driver "docker" against <nil>
	I0731 22:31:02.262565 1584620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 22:31:02.315187 1584620 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-07-31 22:31:02.305519088 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0731 22:31:02.315371 1584620 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 22:31:02.315630 1584620 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0731 22:31:02.315783 1584620 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 22:31:02.318347 1584620 out.go:169] Using Docker driver with root privileges
	I0731 22:31:02.320484 1584620 cni.go:84] Creating CNI manager for ""
	I0731 22:31:02.320518 1584620 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 22:31:02.320530 1584620 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 22:31:02.320636 1584620 start.go:340] cluster config:
	{Name:download-only-818567 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-818567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:31:02.322863 1584620 out.go:97] Starting "download-only-818567" primary control-plane node in "download-only-818567" cluster
	I0731 22:31:02.322895 1584620 cache.go:121] Beginning downloading kic base image for docker with crio
	I0731 22:31:02.325024 1584620 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0731 22:31:02.325057 1584620 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 22:31:02.325243 1584620 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0731 22:31:02.340112 1584620 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0731 22:31:02.340659 1584620 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0731 22:31:02.340775 1584620 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0731 22:31:02.388903 1584620 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0731 22:31:02.388959 1584620 cache.go:56] Caching tarball of preloaded images
	I0731 22:31:02.389612 1584620 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 22:31:02.392027 1584620 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0731 22:31:02.392059 1584620 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0731 22:31:02.475556 1584620 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19360-1579223/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0731 22:31:08.799771 1584620 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0731 22:31:08.799874 1584620 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19360-1579223/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0731 22:31:08.999420 1584620 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0731 22:31:09.926635 1584620 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 22:31:09.926992 1584620 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/download-only-818567/config.json ...
	I0731 22:31:09.927025 1584620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/download-only-818567/config.json: {Name:mk82a911b46fa2e25bd93730f3145864d48efd2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:31:09.927645 1584620 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 22:31:09.928240 1584620 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19360-1579223/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-818567 host does not exist
	  To start a cluster, run: "minikube start -p download-only-818567"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-818567
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.30.3/json-events (8.49s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-808614 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-808614 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.49065255s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (8.49s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-808614
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-808614: exit status 85 (66.310999ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-818567 | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC |                     |
	|         | -p download-only-818567        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC | 31 Jul 24 22:31 UTC |
	| delete  | -p download-only-818567        | download-only-818567 | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC | 31 Jul 24 22:31 UTC |
	| start   | -o=json --download-only        | download-only-808614 | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC |                     |
	|         | -p download-only-808614        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 22:31:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 22:31:12.890664 1584825 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:31:12.890894 1584825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:31:12.890922 1584825 out.go:304] Setting ErrFile to fd 2...
	I0731 22:31:12.890939 1584825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:31:12.891225 1584825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1579223/.minikube/bin
	I0731 22:31:12.891757 1584825 out.go:298] Setting JSON to true
	I0731 22:31:12.892746 1584825 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22411,"bootTime":1722442662,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0731 22:31:12.892847 1584825 start.go:139] virtualization:  
	I0731 22:31:12.895265 1584825 out.go:97] [download-only-808614] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0731 22:31:12.895502 1584825 notify.go:220] Checking for updates...
	I0731 22:31:12.897546 1584825 out.go:169] MINIKUBE_LOCATION=19360
	I0731 22:31:12.899639 1584825 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 22:31:12.901559 1584825 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19360-1579223/kubeconfig
	I0731 22:31:12.903263 1584825 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1579223/.minikube
	I0731 22:31:12.905249 1584825 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0731 22:31:12.908920 1584825 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 22:31:12.909304 1584825 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 22:31:12.935032 1584825 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0731 22:31:12.935131 1584825 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 22:31:12.989522 1584825 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-31 22:31:12.979540157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0731 22:31:12.989627 1584825 docker.go:307] overlay module found
	I0731 22:31:12.991586 1584825 out.go:97] Using the docker driver based on user configuration
	I0731 22:31:12.991616 1584825 start.go:297] selected driver: docker
	I0731 22:31:12.991623 1584825 start.go:901] validating driver "docker" against <nil>
	I0731 22:31:12.991724 1584825 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 22:31:13.052049 1584825 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-31 22:31:13.042509515 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0731 22:31:13.052218 1584825 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 22:31:13.052502 1584825 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0731 22:31:13.052664 1584825 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 22:31:13.054922 1584825 out.go:169] Using Docker driver with root privileges
	I0731 22:31:13.057078 1584825 cni.go:84] Creating CNI manager for ""
	I0731 22:31:13.057097 1584825 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 22:31:13.057145 1584825 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 22:31:13.057256 1584825 start.go:340] cluster config:
	{Name:download-only-808614 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-808614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:31:13.059577 1584825 out.go:97] Starting "download-only-808614" primary control-plane node in "download-only-808614" cluster
	I0731 22:31:13.059596 1584825 cache.go:121] Beginning downloading kic base image for docker with crio
	I0731 22:31:13.061560 1584825 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0731 22:31:13.061593 1584825 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 22:31:13.061759 1584825 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0731 22:31:13.077153 1584825 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0731 22:31:13.077276 1584825 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0731 22:31:13.077302 1584825 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0731 22:31:13.077308 1584825 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0731 22:31:13.077316 1584825 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0731 22:31:13.121155 1584825 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	I0731 22:31:13.121183 1584825 cache.go:56] Caching tarball of preloaded images
	I0731 22:31:13.121808 1584825 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 22:31:13.124826 1584825 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0731 22:31:13.124872 1584825 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 ...
	I0731 22:31:13.213845 1584825 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4?checksum=md5:bace9a3612be7d31e4d3c3d446951ced -> /home/jenkins/minikube-integration/19360-1579223/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-808614 host does not exist
	  To start a cluster, run: "minikube start -p download-only-808614"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.07s)

TestDownloadOnly/v1.30.3/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.19s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-808614
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.0-beta.0/json-events (9.76s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-586553 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-586553 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.75887871s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (9.76s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-586553
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-586553: exit status 85 (72.764515ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-818567 | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC |                     |
	|         | -p download-only-818567             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC | 31 Jul 24 22:31 UTC |
	| delete  | -p download-only-818567             | download-only-818567 | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC | 31 Jul 24 22:31 UTC |
	| start   | -o=json --download-only             | download-only-808614 | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC |                     |
	|         | -p download-only-808614             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC | 31 Jul 24 22:31 UTC |
	| delete  | -p download-only-808614             | download-only-808614 | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC | 31 Jul 24 22:31 UTC |
	| start   | -o=json --download-only             | download-only-586553 | jenkins | v1.33.1 | 31 Jul 24 22:31 UTC |                     |
	|         | -p download-only-586553             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 22:31:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 22:31:21.769308 1585029 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:31:21.769523 1585029 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:31:21.769551 1585029 out.go:304] Setting ErrFile to fd 2...
	I0731 22:31:21.769570 1585029 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:31:21.769824 1585029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1579223/.minikube/bin
	I0731 22:31:21.770257 1585029 out.go:298] Setting JSON to true
	I0731 22:31:21.771188 1585029 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22420,"bootTime":1722442662,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0731 22:31:21.771278 1585029 start.go:139] virtualization:  
	I0731 22:31:21.774027 1585029 out.go:97] [download-only-586553] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0731 22:31:21.774307 1585029 notify.go:220] Checking for updates...
	I0731 22:31:21.775930 1585029 out.go:169] MINIKUBE_LOCATION=19360
	I0731 22:31:21.777664 1585029 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 22:31:21.779679 1585029 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19360-1579223/kubeconfig
	I0731 22:31:21.781942 1585029 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1579223/.minikube
	I0731 22:31:21.783800 1585029 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0731 22:31:21.787495 1585029 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 22:31:21.787781 1585029 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 22:31:21.814618 1585029 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0731 22:31:21.814721 1585029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 22:31:21.869324 1585029 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-31 22:31:21.859641125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0731 22:31:21.869439 1585029 docker.go:307] overlay module found
	I0731 22:31:21.871756 1585029 out.go:97] Using the docker driver based on user configuration
	I0731 22:31:21.871780 1585029 start.go:297] selected driver: docker
	I0731 22:31:21.871788 1585029 start.go:901] validating driver "docker" against <nil>
	I0731 22:31:21.871898 1585029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 22:31:21.924444 1585029 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-31 22:31:21.915128024 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0731 22:31:21.924613 1585029 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 22:31:21.924895 1585029 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0731 22:31:21.925049 1585029 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 22:31:21.927672 1585029 out.go:169] Using Docker driver with root privileges
	I0731 22:31:21.929375 1585029 cni.go:84] Creating CNI manager for ""
	I0731 22:31:21.929395 1585029 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 22:31:21.929407 1585029 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 22:31:21.929494 1585029 start.go:340] cluster config:
	{Name:download-only-586553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-586553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:31:21.932038 1585029 out.go:97] Starting "download-only-586553" primary control-plane node in "download-only-586553" cluster
	I0731 22:31:21.932059 1585029 cache.go:121] Beginning downloading kic base image for docker with crio
	I0731 22:31:21.933702 1585029 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0731 22:31:21.933746 1585029 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 22:31:21.933912 1585029 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0731 22:31:21.948022 1585029 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0731 22:31:21.948137 1585029 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0731 22:31:21.948176 1585029 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0731 22:31:21.948186 1585029 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0731 22:31:21.948194 1585029 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0731 22:31:21.993231 1585029 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I0731 22:31:21.993255 1585029 cache.go:56] Caching tarball of preloaded images
	I0731 22:31:21.993425 1585029 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 22:31:21.995884 1585029 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0731 22:31:21.995912 1585029 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4 ...
	I0731 22:31:22.104961 1585029 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:70b5971c257ae4defe1f5d041a04e29c -> /home/jenkins/minikube-integration/19360-1579223/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-586553 host does not exist
	  To start a cluster, run: "minikube start -p download-only-586553"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.07s)
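Note: the preload fetch above appends a ?checksum=md5:... query and verifies the tarball before caching it. A minimal Go sketch of that download-then-verify pattern (the destination path is illustrative; this is not minikube's actual download.go):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// downloadWithMD5 streams url into dest while hashing, then compares the digest.
	func downloadWithMD5(url, dest, wantMD5 string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()

		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}

	func main() {
		err := downloadWithMD5(
			"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4",
			"/tmp/preload.tar.lz4", // illustrative destination
			"70b5971c257ae4defe1f5d041a04e29c")
		fmt.Println(err)
	}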

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-586553
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestBinaryMirror (0.54s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-970793 --alsologtostderr --binary-mirror http://127.0.0.1:38085 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-970793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-970793
--- PASS: TestBinaryMirror (0.54s)
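Note: TestBinaryMirror points minikube at a mirror on 127.0.0.1:38085 via --binary-mirror. The serving side can be as simple as a static file server over a directory of pre-fetched binaries; a minimal sketch, assuming the binaries live under /tmp/binary-mirror (an illustrative path, not what the test actually spins up):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve a local directory of cached kubectl/kubelet/kubeadm binaries, then
		// start minikube with --binary-mirror http://127.0.0.1:38085
		fs := http.FileServer(http.Dir("/tmp/binary-mirror"))
		log.Fatal(http.ListenAndServe("127.0.0.1:38085", fs))
	}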

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-849486
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-849486: exit status 85 (78.379108ms)

                                                
                                                
-- stdout --
	* Profile "addons-849486" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-849486"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)
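Note: exit status 85 is the expected outcome here, since the profile does not exist yet. A minimal sketch of how a harness can capture that exit code with os/exec (the binary path and arguments mirror the log line above):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "addons", "enable", "dashboard", "-p", "addons-849486")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// gopogh records this as "Non-zero exit: ... exit status 85"
			fmt.Println("exit code:", ee.ExitCode())
		}
	}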

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-849486
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-849486: exit status 85 (64.230121ms)

                                                
                                                
-- stdout --
	* Profile "addons-849486" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-849486"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (226.26s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-849486 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-849486 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m46.261762703s)
--- PASS: TestAddons/Setup (226.26s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.21s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-849486 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-849486 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

                                                
                                    
x
+
TestAddons/parallel/Registry (18.03s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.499825ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-xsv4s" [505680ce-0882-4b35-957c-5038c3ef415e] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00413574s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7fzhl" [83d11338-5592-463e-b649-7ab9c5714f7d] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004924529s
addons_test.go:342: (dbg) Run:  kubectl --context addons-849486 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-849486 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-849486 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.751592287s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-849486 ip
2024/07/31 22:35:55 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-849486 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.03s)
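Note: the registry check above runs wget --spider from a busybox pod because registry.kube-system.svc.cluster.local only resolves inside the cluster. A rough in-cluster equivalent in Go, where HEAD is the closest analogue of --spider (headers only, no body):

	package main

	import (
		"fmt"
		"log"
		"net/http"
	)

	func main() {
		resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			log.Fatal(err) // expected to fail outside the cluster network
		}
		defer resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}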

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.8s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jmn88" [ecb73901-bf5b-411e-aef7-774775de16e5] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.009517321s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-849486
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-849486: (5.789172855s)
--- PASS: TestAddons/parallel/InspektorGadget (11.80s)

                                                
                                    
x
+
TestAddons/parallel/CSI (60.79s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 12.027758ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-849486 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-849486 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [86e089fe-cfdc-49a9-94b3-c5685cd3ee64] Pending
helpers_test.go:344: "task-pv-pod" [86e089fe-cfdc-49a9-94b3-c5685cd3ee64] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [86e089fe-cfdc-49a9-94b3-c5685cd3ee64] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004555993s
addons_test.go:590: (dbg) Run:  kubectl --context addons-849486 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-849486 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-849486 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-849486 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-849486 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-849486 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-849486 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [aa9eb922-763f-4250-9564-671fc7c64147] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [aa9eb922-763f-4250-9564-671fc7c64147] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004317934s
addons_test.go:632: (dbg) Run:  kubectl --context addons-849486 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-849486 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-849486 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-849486 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-849486 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.781039083s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-849486 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (60.79s)
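Note: the repeated helpers_test.go:394 lines above are a poll loop on .status.phase until the claim reports Bound. A minimal sketch of that loop, shelling out to kubectl (context, namespace, and timeout mirror the log, but this is not the helper's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCBound polls `kubectl get pvc` until the claim reports phase Bound.
	func waitForPVCBound(ctx, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
				"-o", "jsonpath={.status.phase}", "-n", ns).Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s not Bound within %s", ns, name, timeout)
	}

	func main() {
		if err := waitForPVCBound("addons-849486", "default", "hpvc", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}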

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.62s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-849486 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-4dtts" [fbac6863-99b6-4c73-addd-e286e1947782] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-4dtts" [fbac6863-99b6-4c73-addd-e286e1947782] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004022294s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-849486 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-849486 addons disable headlamp --alsologtostderr -v=1: (5.687976664s)
--- PASS: TestAddons/parallel/Headlamp (17.62s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-g8rxr" [ecc4e71d-2302-4402-ac55-e0a44d171d8a] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005045144s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-849486
--- PASS: TestAddons/parallel/CloudSpanner (6.58s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (51.57s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-849486 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-849486 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-849486 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [03021c43-e7fc-4d54-8b52-5e30536ef0e0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [03021c43-e7fc-4d54-8b52-5e30536ef0e0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [03021c43-e7fc-4d54-8b52-5e30536ef0e0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003735418s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-849486 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-849486 ssh "cat /opt/local-path-provisioner/pvc-9f22855a-010f-402c-a661-b7cd21d58d00_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-849486 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-849486 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-849486 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-849486 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.2963103s)
--- PASS: TestAddons/parallel/LocalPath (51.57s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.56s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-tjbj7" [04df05f4-a9ce-4b7d-a544-2ed8988a7f7d] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004480075s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-849486
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.56s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.76s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-tz7dq" [e096a0b7-63d7-4e4a-a1ea-c59e6acf79d4] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004014888s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-849486 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-849486 addons disable yakd --alsologtostderr -v=1: (5.755882543s)
--- PASS: TestAddons/parallel/Yakd (11.76s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.15s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-849486
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-849486: (11.868709804s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-849486
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-849486
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-849486
--- PASS: TestAddons/StoppedEnableDisable (12.15s)

                                                
                                    
x
+
TestCertOptions (40.57s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-417962 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-417962 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (37.967091766s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-417962 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-417962 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-417962 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-417962" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-417962
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-417962: (1.978211354s)
--- PASS: TestCertOptions (40.57s)
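Note: the openssl x509 call above is what confirms the extra --apiserver-ips/--apiserver-names made it into the serving certificate's SANs. A rough Go equivalent, assuming the cert has first been copied out of the node to a local apiserver.crt:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// The SANs should include 192.168.15.15 and www.google.com from the flags above.
		fmt.Println("IPs:", cert.IPAddresses)
		fmt.Println("DNS names:", cert.DNSNames)
	}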

                                                
                                    
x
+
TestCertExpiration (237.56s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-620866 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-620866 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (38.136502391s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-620866 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-620866 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (17.023934418s)
helpers_test.go:175: Cleaning up "cert-expiration-620866" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-620866
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-620866: (2.401825634s)
--- PASS: TestCertExpiration (237.56s)
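Note: --cert-expiration takes a Go duration string, so 8760h is exactly 365 days; the second start regenerates the certificates with the longer lifetime. For example:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		d, _ := time.ParseDuration("8760h")
		fmt.Println(d.Hours()/24, "days") // 365 days
	}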

                                                
                                    
x
+
TestForceSystemdFlag (42.34s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-479978 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-479978 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.339903548s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-479978 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-479978" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-479978
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-479978: (2.666296075s)
--- PASS: TestForceSystemdFlag (42.34s)

                                                
                                    
x
+
TestForceSystemdEnv (39.79s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-005661 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-005661 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.282690009s)
helpers_test.go:175: Cleaning up "force-systemd-env-005661" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-005661
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-005661: (2.505669714s)
--- PASS: TestForceSystemdEnv (39.79s)

                                                
                                    
x
+
TestErrorSpam/setup (31.81s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-416326 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-416326 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-416326 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-416326 --driver=docker  --container-runtime=crio: (31.811663937s)
--- PASS: TestErrorSpam/setup (31.81s)

                                                
                                    
x
+
TestErrorSpam/start (0.67s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416326 --log_dir /tmp/nospam-416326 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416326 --log_dir /tmp/nospam-416326 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416326 --log_dir /tmp/nospam-416326 start --dry-run
--- PASS: TestErrorSpam/start (0.67s)

                                                
                                    
x
+
TestErrorSpam/status (0.99s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416326 --log_dir /tmp/nospam-416326 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416326 --log_dir /tmp/nospam-416326 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416326 --log_dir /tmp/nospam-416326 status
--- PASS: TestErrorSpam/status (0.99s)

                                                
                                    
x
+
TestErrorSpam/pause (1.69s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416326 --log_dir /tmp/nospam-416326 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416326 --log_dir /tmp/nospam-416326 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416326 --log_dir /tmp/nospam-416326 pause
--- PASS: TestErrorSpam/pause (1.69s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.76s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416326 --log_dir /tmp/nospam-416326 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416326 --log_dir /tmp/nospam-416326 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416326 --log_dir /tmp/nospam-416326 unpause
--- PASS: TestErrorSpam/unpause (1.76s)

                                                
                                    
x
+
TestErrorSpam/stop (1.39s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416326 --log_dir /tmp/nospam-416326 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-416326 --log_dir /tmp/nospam-416326 stop: (1.209288279s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416326 --log_dir /tmp/nospam-416326 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416326 --log_dir /tmp/nospam-416326 stop
--- PASS: TestErrorSpam/stop (1.39s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19360-1579223/.minikube/files/etc/test/nested/copy/1584615/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (59.27s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-818778 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-818778 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (59.263597239s)
--- PASS: TestFunctional/serial/StartWithProxy (59.27s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (44.24s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-818778 --alsologtostderr -v=8
E0731 22:45:20.331362 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
E0731 22:45:20.338081 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
E0731 22:45:20.348271 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
E0731 22:45:20.368480 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
E0731 22:45:20.408719 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
E0731 22:45:20.488990 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
E0731 22:45:20.649333 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
E0731 22:45:20.969550 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
E0731 22:45:21.610299 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
E0731 22:45:22.890991 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
E0731 22:45:25.451999 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
E0731 22:45:30.572294 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
E0731 22:45:40.813008 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-818778 --alsologtostderr -v=8: (44.238693736s)
functional_test.go:659: soft start took 44.242899875s for "functional-818778" cluster.
--- PASS: TestFunctional/serial/SoftStart (44.24s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-818778 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.33s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-818778 cache add registry.k8s.io/pause:3.1: (1.488969252s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-818778 cache add registry.k8s.io/pause:3.3: (1.484771547s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-818778 cache add registry.k8s.io/pause:latest: (1.356016865s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.33s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-818778 /tmp/TestFunctionalserialCacheCmdcacheadd_local190933287/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 cache add minikube-local-cache-test:functional-818778
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 cache delete minikube-local-cache-test:functional-818778
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-818778
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-818778 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (283.791315ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-818778 cache reload: (1.209857108s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.13s)
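Note: the sequence above is: remove the image on the node, watch crictl inspecti fail with exit 1, run cache reload to push the host cache back into the node, then inspect again. A compact sketch of that round trip (error handling trimmed; not the test's actual code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) error {
		return exec.Command("out/minikube-linux-arm64", args...).Run()
	}

	func main() {
		p := "functional-818778"
		_ = run("-p", p, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
		// The image is now gone from the node, so inspecti exits non-zero.
		if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
			fmt.Println("image missing as expected:", err)
		}
		// cache reload pushes the host-side image cache back into the node.
		_ = run("-p", p, "cache", "reload")
		if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
			fmt.Println("image restored from host cache")
		}
	}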

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 kubectl -- --context functional-818778 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-818778 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (38.4s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-818778 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0731 22:46:01.293237 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-818778 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.401757679s)
functional_test.go:757: restart took 38.401919517s for "functional-818778" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.40s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-818778 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)
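Note: the phase/status pairs above come from listing the control-plane pods as JSON and checking both .status.phase and the Ready condition. A sketch that decodes just those fields (the struct is trimmed to what the check needs; not the test's actual types):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-818778",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			ready := "NotReady"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" && c.Status == "True" {
					ready = "Ready"
				}
			}
			fmt.Printf("%s phase: %s, status: %s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}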

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.68s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-818778 logs: (1.684485644s)
--- PASS: TestFunctional/serial/LogsCmd (1.68s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.72s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 logs --file /tmp/TestFunctionalserialLogsFileCmd1732152687/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-818778 logs --file /tmp/TestFunctionalserialLogsFileCmd1732152687/001/logs.txt: (1.716808487s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.72s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.13s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-818778 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-818778
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-818778: exit status 115 (606.979179ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30744 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-818778 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.13s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-818778 config get cpus: exit status 14 (95.227839ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-818778 config get cpus: exit status 14 (80.484161ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (12.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-818778 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-818778 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1612977: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.21s)

                                                
                                    
TestFunctional/parallel/DryRun (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-818778 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-818778 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (188.346114ms)

                                                
                                                
-- stdout --
	* [functional-818778] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19360-1579223/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1579223/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 22:47:08.993868 1612655 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:47:08.994085 1612655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:47:08.994098 1612655 out.go:304] Setting ErrFile to fd 2...
	I0731 22:47:08.994104 1612655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:47:08.994391 1612655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1579223/.minikube/bin
	I0731 22:47:08.994775 1612655 out.go:298] Setting JSON to false
	I0731 22:47:08.995796 1612655 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23367,"bootTime":1722442662,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0731 22:47:08.995867 1612655 start.go:139] virtualization:  
	I0731 22:47:08.998566 1612655 out.go:177] * [functional-818778] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0731 22:47:09.001170 1612655 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 22:47:09.001354 1612655 notify.go:220] Checking for updates...
	I0731 22:47:09.016090 1612655 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 22:47:09.018164 1612655 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1579223/kubeconfig
	I0731 22:47:09.020091 1612655 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1579223/.minikube
	I0731 22:47:09.021842 1612655 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 22:47:09.023614 1612655 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 22:47:09.025909 1612655 config.go:182] Loaded profile config "functional-818778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:47:09.026613 1612655 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 22:47:09.061240 1612655 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0731 22:47:09.061364 1612655 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 22:47:09.120905 1612655 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-31 22:47:09.111015153 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0731 22:47:09.121024 1612655 docker.go:307] overlay module found
	I0731 22:47:09.123186 1612655 out.go:177] * Using the docker driver based on existing profile
	I0731 22:47:09.125010 1612655 start.go:297] selected driver: docker
	I0731 22:47:09.125027 1612655 start.go:901] validating driver "docker" against &{Name:functional-818778 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-818778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:47:09.125243 1612655 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 22:47:09.127568 1612655 out.go:177] 
	W0731 22:47:09.129356 1612655 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0731 22:47:09.131102 1612655 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-818778 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.40s)
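Note: exit status 23 is minikube's reserved code for RSRC_INSUFFICIENT_REQ_MEMORY, so the test passes precisely because the 250MB dry run fails fast while the flag-only dry run at functional_test.go:987 succeeds. A sketch of the same pair of probes, assuming `minikube` is on PATH and the profile already exists:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// dryRunExit starts minikube with --dry-run, which validates configuration
// without mutating the existing cluster, and reports the exit code.
func dryRunExit(extra ...string) int {
	args := append([]string{"start", "-p", "functional-818778", "--dry-run"}, extra...)
	if err := exec.Command("minikube", args...).Run(); err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return exitErr.ExitCode()
		}
		return -1 // the binary failed to launch at all
	}
	return 0
}

func main() {
	fmt.Println(dryRunExit("--memory", "250MB")) // expect 23: below the 1800MB minimum
	fmt.Println(dryRunExit())                    // expect 0 on a healthy existing profile
}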

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-818778 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-818778 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (169.405888ms)

                                                
                                                
-- stdout --
	* [functional-818778] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19360-1579223/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1579223/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 22:47:08.830949 1612609 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:47:08.831152 1612609 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:47:08.831181 1612609 out.go:304] Setting ErrFile to fd 2...
	I0731 22:47:08.831201 1612609 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:47:08.831913 1612609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1579223/.minikube/bin
	I0731 22:47:08.832313 1612609 out.go:298] Setting JSON to false
	I0731 22:47:08.833330 1612609 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23367,"bootTime":1722442662,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0731 22:47:08.833439 1612609 start.go:139] virtualization:  
	I0731 22:47:08.836052 1612609 out.go:177] * [functional-818778] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0731 22:47:08.838503 1612609 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 22:47:08.838670 1612609 notify.go:220] Checking for updates...
	I0731 22:47:08.842918 1612609 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 22:47:08.844715 1612609 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1579223/kubeconfig
	I0731 22:47:08.846321 1612609 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1579223/.minikube
	I0731 22:47:08.848296 1612609 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 22:47:08.850139 1612609 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 22:47:08.852446 1612609 config.go:182] Loaded profile config "functional-818778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:47:08.853039 1612609 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 22:47:08.874590 1612609 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0731 22:47:08.874704 1612609 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 22:47:08.934091 1612609 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-31 22:47:08.92425611 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0731 22:47:08.934198 1612609 docker.go:307] overlay module found
	I0731 22:47:08.936043 1612609 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0731 22:47:08.937801 1612609 start.go:297] selected driver: docker
	I0731 22:47:08.937819 1612609 start.go:901] validating driver "docker" against &{Name:functional-818778 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-818778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:47:08.937936 1612609 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 22:47:08.940044 1612609 out.go:177] 
	W0731 22:47:08.942014 1612609 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0731 22:47:08.943927 1612609 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
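Note: the French output is the point of this test. "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo" is the same insufficient-memory error shown in the DryRun section above ("Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB"), rendered through minikube's translations. A sketch of reproducing it by hand, under the assumption that minikube selects client-message translations from the standard locale environment variables:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Assumption: LC_ALL=fr causes minikube to emit the French messages above.
	cmd := exec.Command("minikube", "start", "-p", "functional-818778",
		"--dry-run", "--memory", "250MB")
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	_ = cmd.Run() // expect exit 23 with the localized message
}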

                                                
                                    
TestFunctional/parallel/StatusCmd (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.11s)
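Note: `status -f` takes a Go template over minikube's status struct (the "kublet" label in the command above is literal text inside the template, not a field reference; the field itself is {{.Kubelet}}), and `status -o json` emits the same fields as JSON. A sketch of consuming the JSON form, assuming the key names minikube prints (Host, Kubelet, APIServer, Kubeconfig):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type nodeStatus struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// `minikube status` exits non-zero when components are stopped, but may
	// still print usable JSON, so the output is parsed regardless of the error.
	out, _ := exec.Command("minikube", "-p", "functional-818778",
		"status", "-o", "json").Output()
	var st nodeStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}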

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (12.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-818778 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-818778 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-kkzvp" [31d833f0-96d9-493b-a13b-eb756814e040] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-kkzvp" [31d833f0-96d9-493b-a13b-eb756814e040] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.003877809s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:31737
functional_test.go:1671: http://192.168.49.2:31737: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-6f49f58cd5-kkzvp

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31737
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.69s)
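Note: the success criterion here is that the NodePort URL printed by `minikube service ... --url` answers an HTTP GET with the echoserver body shown above. The same probe as a standalone sketch, assuming `minikube` is on PATH and the command prints a single URL:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-818778",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.49.2:31737
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d\n%s", url, resp.StatusCode, body)
}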

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d3433ae4-7358-47ce-83ba-dd6ddae647d0] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004687022s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-818778 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-818778 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-818778 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-818778 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b69f19b4-285e-4354-bef8-d14600055811] Pending
helpers_test.go:344: "sp-pod" [b69f19b4-285e-4354-bef8-d14600055811] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b69f19b4-285e-4354-bef8-d14600055811] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003653312s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-818778 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-818778 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-818778 delete -f testdata/storage-provisioner/pod.yaml: (1.156542268s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-818778 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f7cbcb68-946a-433f-afa4-ae0fc1b0cead] Pending
helpers_test.go:344: "sp-pod" [f7cbcb68-946a-433f-afa4-ae0fc1b0cead] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003814944s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-818778 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.10s)
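Note: the persistence check is the heart of this test: write a file into the PVC-backed mount, delete and recreate the pod, then confirm the file is still there. Sketched with plain kubectl calls, assuming the PVC and pod manifests from testdata/storage-provisioner are already applied and the recreated pod has reached Running:

package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) {
	full := append([]string{"--context", "functional-818778"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	if err != nil {
		panic(err)
	}
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write through the PVC
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// In practice, wait for the new sp-pod to be Running before the next step.
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount") // expect foo to survive
}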

                                                
                                    
TestFunctional/parallel/SSHCmd (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh -n functional-818778 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 cp functional-818778:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2143936800/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh -n functional-818778 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh -n functional-818778 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.37s)
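Note: the cp test is a round trip: push a local file into the node, read it back over ssh, and copy it back out to a temp path. A minimal sketch of the push-and-verify half, assuming testdata/cp-test.txt exists locally:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Copy a host file into the node's filesystem...
	if err := exec.Command("minikube", "-p", "functional-818778", "cp",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	// ...then read it back over ssh to confirm the contents arrived intact.
	out, err := exec.Command("minikube", "-p", "functional-818778", "ssh",
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("round-tripped contents:\n%s", out)
}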

                                                
                                    
TestFunctional/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1584615/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh "sudo cat /etc/test/nested/copy/1584615/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

                                                
                                    
TestFunctional/parallel/CertSync (2.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1584615.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh "sudo cat /etc/ssl/certs/1584615.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1584615.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh "sudo cat /usr/share/ca-certificates/1584615.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/15846152.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh "sudo cat /etc/ssl/certs/15846152.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/15846152.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh "sudo cat /usr/share/ca-certificates/15846152.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.23s)
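Note: the numeric names checked above (/etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0) are OpenSSL subject-hash filenames: synced certificates are installed under <subject-hash>.0 so the system trust store can find them. A sketch of recomputing the expected name for a certificate, assuming openssl is installed; certPath is a hypothetical stand-in for a cert like the 1584615.pem above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	certPath := "testcert.pem" // hypothetical: any PEM cert synced into the node
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	fmt.Printf("expect /etc/ssl/certs/%s.0 inside the minikube node\n", hash)
}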

                                                
                                    
TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-818778 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-818778 ssh "sudo systemctl is-active docker": exit status 1 (361.983701ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-818778 ssh "sudo systemctl is-active containerd": exit status 1 (377.402478ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)
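Note: `systemctl is-active` prints the unit state and exits 0 only for "active"; the exit status 3 / "inactive" pairs above are therefore the expected result for docker and containerd on a crio-runtime cluster. A sketch that captures both the state string and the activity bit:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// isActive asks systemd inside the node for a unit's state. A non-zero exit
// from the ssh'd `systemctl is-active` still carries the state text on stdout.
func isActive(unit string) (string, bool) {
	out, err := exec.Command("minikube", "-p", "functional-818778", "ssh",
		"sudo systemctl is-active "+unit).Output()
	state := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return state, false
	}
	return state, err == nil
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		state, active := isActive(unit)
		fmt.Printf("%s: %s (active=%v)\n", unit, state, active)
	}
}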

                                                
                                    
TestFunctional/parallel/License (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.26s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-818778 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-818778 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-818778 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1610298: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-818778 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-818778 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-818778 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [82648cbe-178d-4ee9-a937-6cf67314fc28] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [82648cbe-178d-4ee9-a937-6cf67314fc28] Running
E0731 22:46:42.254168 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003473609s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.41s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-818778 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.145.198 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-818778 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-818778 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-818778 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-rh9lz" [4f48c854-5cbe-4aca-b57a-d767d0003a4a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-rh9lz" [4f48c854-5cbe-4aca-b57a-d767d0003a4a] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.00408239s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "327.859183ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "53.693591ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "337.412217ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "54.796151ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-818778 /tmp/TestFunctionalparallelMountCmdany-port1798168006/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722466024509734094" to /tmp/TestFunctionalparallelMountCmdany-port1798168006/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722466024509734094" to /tmp/TestFunctionalparallelMountCmdany-port1798168006/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722466024509734094" to /tmp/TestFunctionalparallelMountCmdany-port1798168006/001/test-1722466024509734094
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-818778 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (394.084046ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 31 22:47 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 31 22:47 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 31 22:47 test-1722466024509734094
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh cat /mount-9p/test-1722466024509734094
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-818778 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c86bf3b0-904c-429b-9e72-9b8b386a5481] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c86bf3b0-904c-429b-9e72-9b8b386a5481] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c86bf3b0-904c-429b-9e72-9b8b386a5481] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.012058766s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-818778 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-818778 /tmp/TestFunctionalparallelMountCmdany-port1798168006/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.36s)
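Note: the single failed findmnt probe above is a benign race: the 9p mount is established by the backgrounded `minikube mount` daemon, so the first check can run before the mount lands, and the harness simply retries. A sketch of the same poll-until-mounted loop:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll for the 9p mount instead of failing on the first missed probe.
	for i := 0; i < 10; i++ {
		out, err := exec.Command("minikube", "-p", "functional-818778", "ssh",
			"findmnt -T /mount-9p | grep 9p").Output()
		if err == nil {
			fmt.Printf("9p mount present:\n%s", out)
			return
		}
		time.Sleep(time.Second) // the mount daemon may still be starting
	}
	fmt.Println("mount never appeared")
}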

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 service list -o json
functional_test.go:1490: Took "594.45324ms" to run "out/minikube-linux-arm64 -p functional-818778 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:30693
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:30693
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-818778 /tmp/TestFunctionalparallelMountCmdspecific-port580771562/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-818778 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (433.047808ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-818778 /tmp/TestFunctionalparallelMountCmdspecific-port580771562/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-818778 ssh "sudo umount -f /mount-9p": exit status 1 (278.214448ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-818778 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-818778 /tmp/TestFunctionalparallelMountCmdspecific-port580771562/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.17s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-818778 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4166518437/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-818778 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4166518437/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-818778 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4166518437/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-818778 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-818778 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4166518437/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-818778 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4166518437/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-818778 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4166518437/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.65s)

                                                
                                    
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.55s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-818778 version -o=json --components: (1.553148888s)
--- PASS: TestFunctional/parallel/Version/components (1.55s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-818778 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240719-e7903573
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-818778
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-818778 image ls --format short --alsologtostderr:
I0731 22:47:25.361771 1615336 out.go:291] Setting OutFile to fd 1 ...
I0731 22:47:25.361995 1615336 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:47:25.362008 1615336 out.go:304] Setting ErrFile to fd 2...
I0731 22:47:25.362012 1615336 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:47:25.362272 1615336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1579223/.minikube/bin
I0731 22:47:25.362946 1615336 config.go:182] Loaded profile config "functional-818778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 22:47:25.363150 1615336 config.go:182] Loaded profile config "functional-818778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 22:47:25.363666 1615336 cli_runner.go:164] Run: docker container inspect functional-818778 --format={{.State.Status}}
I0731 22:47:25.394167 1615336 ssh_runner.go:195] Run: systemctl --version
I0731 22:47:25.394246 1615336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-818778
I0731 22:47:25.426490 1615336 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34651 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/functional-818778/id_rsa Username:docker}
I0731 22:47:25.527075 1615336 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-818778 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | alpine             | d7cd33d7d4ed1 | 46.7MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 014faa467e297 | 140MB  |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| docker.io/kicbase/echo-server           | functional-818778  | ce2d2cda2d858 | 4.79MB |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5e32961ddcea3 | 90.3MB |
| docker.io/kindest/kindnetd              | v20240719-e7903573 | f42786f8afd22 | 90.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/kube-proxy              | v1.30.3            | 2351f570ed0ea | 89.2MB |
| registry.k8s.io/kube-scheduler          | v1.30.3            | d48f992a22722 | 61.6MB |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/coredns/coredns         | v1.11.1            | 2437cf7621777 | 58.8MB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 61773190d42ff | 114MB  |
| docker.io/library/nginx                 | latest             | 43b17fe33c4b4 | 197MB  |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 8e97cdb19e7cc | 108MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-818778 image ls --format table --alsologtostderr:
I0731 22:47:26.197505 1615545 out.go:291] Setting OutFile to fd 1 ...
I0731 22:47:26.198043 1615545 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:47:26.198058 1615545 out.go:304] Setting ErrFile to fd 2...
I0731 22:47:26.198064 1615545 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:47:26.198361 1615545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1579223/.minikube/bin
I0731 22:47:26.199120 1615545 config.go:182] Loaded profile config "functional-818778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 22:47:26.199279 1615545 config.go:182] Loaded profile config "functional-818778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 22:47:26.199822 1615545 cli_runner.go:164] Run: docker container inspect functional-818778 --format={{.State.Status}}
I0731 22:47:26.228863 1615545 ssh_runner.go:195] Run: systemctl --version
I0731 22:47:26.228922 1615545 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-818778
I0731 22:47:26.249232 1615545 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34651 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/functional-818778/id_rsa Username:docker}
I0731 22:47:26.341855 1615545 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-818778 image ls --format json --alsologtostderr:
[{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"58812704"},{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":["registry.k8s.io/kube-apiserver@sha256:30d6b23df5ccf427536840a904047f3cd946c9c78bf9750f0d82b18409d6089e","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"113538528"},{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:078d7873222b53b4530e619e8dc5bccf8420557f3be2b4996a65e59ba4a09499","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"108229958"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":["docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9","docker.io/library/nginx@sha256:37d07a7f2aef3a0cc9ca4aafd9331c0796e47536c06a1f7304f98d69816baed7"],"repoTags":["docker.io/library/nginx:alpine"],"size":"46671358"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["docker.io/kicbase/echo-server:functional-818778"],"size":"4788229"},{"id":"43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76","repoDigests":["docker.io/library/nginx@sha256:2732a234518030d4fd7a4562515a42d05d93a99faba1c2b07c68e0eeaa9ee65c","docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c"],"repoTags":["docker.io/library/nginx:latest"],"size":"197104786"},{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":["registry.k8s.io/kube-proxy@sha256:22d1f9b0734b7dbb2266b889edf456303746e750129e4d7f20699f23e9a31acc","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"89199511"},{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4","registry.k8s.io/kube-scheduler@sha256:f194dea192a672732bc45ef2e7a0bcf28080ae6bd0626bd2c444edda987d7b95"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"61568326"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2","repoDigests":["docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493","docker.io/kindest/kindnetd@sha256:ca8545687e833593ef3047fdbb04957ab9a32153bc36738760b6975879ada987"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"90278450"},{"id":"f42786f8afd2214fc59fbf9a26531806f562488d4a7d7a31e8b5e9ff6289b800","repoDigests":["docker.io/kindest/kindnetd@sha256:14100a3a7aca6cad3de3f26ee342ad937ca7d2844b1983d3baa7bf5f491baa7a","docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"],"repoTags":["docker.io/kindest/kindnetd:v20240719-e7903573"],"size":"90281007"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b","registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"140414767"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-818778 image ls --format json --alsologtostderr:
I0731 22:47:25.917313 1615461 out.go:291] Setting OutFile to fd 1 ...
I0731 22:47:25.917515 1615461 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:47:25.917528 1615461 out.go:304] Setting ErrFile to fd 2...
I0731 22:47:25.917533 1615461 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:47:25.917830 1615461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1579223/.minikube/bin
I0731 22:47:25.918687 1615461 config.go:182] Loaded profile config "functional-818778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 22:47:25.918857 1615461 config.go:182] Loaded profile config "functional-818778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 22:47:25.919369 1615461 cli_runner.go:164] Run: docker container inspect functional-818778 --format={{.State.Status}}
I0731 22:47:25.938732 1615461 ssh_runner.go:195] Run: systemctl --version
I0731 22:47:25.938785 1615461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-818778
I0731 22:47:25.958782 1615461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34651 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/functional-818778/id_rsa Username:docker}
I0731 22:47:26.057807 1615461 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
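
Each entry in the JSON above carries id, repoDigests, repoTags, and size fields, with untagged images (the dashboard and metrics-scraper entries) showing an empty repoTags array. A hedged one-liner, assuming jq is available on the host, that prints only the tagged image names:

	# Assumes jq is installed; entries whose repoTags list is empty are skipped.
	out/minikube-linux-arm64 -p functional-818778 image ls --format json \
	  | jq -r '.[] | select(.repoTags | length > 0) | .repoTags[]'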

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-818778 image ls --format yaml --alsologtostderr:
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
- registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "140414767"
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:30d6b23df5ccf427536840a904047f3cd946c9c78bf9750f0d82b18409d6089e
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "113538528"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- docker.io/kicbase/echo-server:functional-818778
size: "4788229"
- id: 5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2
repoDigests:
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
- docker.io/kindest/kindnetd@sha256:ca8545687e833593ef3047fdbb04957ab9a32153bc36738760b6975879ada987
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "90278450"
- id: f42786f8afd2214fc59fbf9a26531806f562488d4a7d7a31e8b5e9ff6289b800
repoDigests:
- docker.io/kindest/kindnetd@sha256:14100a3a7aca6cad3de3f26ee342ad937ca7d2844b1983d3baa7bf5f491baa7a
- docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a
repoTags:
- docker.io/kindest/kindnetd:v20240719-e7903573
size: "90281007"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:078d7873222b53b4530e619e8dc5bccf8420557f3be2b4996a65e59ba4a09499
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "108229958"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22d1f9b0734b7dbb2266b889edf456303746e750129e4d7f20699f23e9a31acc
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "89199511"
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
- registry.k8s.io/kube-scheduler@sha256:f194dea192a672732bc45ef2e7a0bcf28080ae6bd0626bd2c444edda987d7b95
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "61568326"
- id: 43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76
repoDigests:
- docker.io/library/nginx@sha256:2732a234518030d4fd7a4562515a42d05d93a99faba1c2b07c68e0eeaa9ee65c
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
repoTags:
- docker.io/library/nginx:latest
size: "197104786"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests:
- docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9
- docker.io/library/nginx@sha256:37d07a7f2aef3a0cc9ca4aafd9331c0796e47536c06a1f7304f98d69816baed7
repoTags:
- docker.io/library/nginx:alpine
size: "46671358"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "58812704"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-818778 image ls --format yaml --alsologtostderr:
I0731 22:47:25.631853 1615401 out.go:291] Setting OutFile to fd 1 ...
I0731 22:47:25.632073 1615401 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:47:25.632103 1615401 out.go:304] Setting ErrFile to fd 2...
I0731 22:47:25.632123 1615401 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:47:25.632375 1615401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1579223/.minikube/bin
I0731 22:47:25.633136 1615401 config.go:182] Loaded profile config "functional-818778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 22:47:25.633322 1615401 config.go:182] Loaded profile config "functional-818778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 22:47:25.633877 1615401 cli_runner.go:164] Run: docker container inspect functional-818778 --format={{.State.Status}}
I0731 22:47:25.654451 1615401 ssh_runner.go:195] Run: systemctl --version
I0731 22:47:25.654504 1615401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-818778
I0731 22:47:25.683722 1615401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34651 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/functional-818778/id_rsa Username:docker}
I0731 22:47:25.782274 1615401 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-818778 ssh pgrep buildkitd: exit status 1 (334.019498ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 image build -t localhost/my-image:functional-818778 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-818778 image build -t localhost/my-image:functional-818778 testdata/build --alsologtostderr: (2.088023838s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-818778 image build -t localhost/my-image:functional-818778 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f641035c295
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-818778
--> dd049ade9e5
Successfully tagged localhost/my-image:functional-818778
dd049ade9e52a4970342e770ee5831c136f6fb4f31fab13d9e4d56edb83125f7
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-818778 image build -t localhost/my-image:functional-818778 testdata/build --alsologtostderr:
I0731 22:47:26.148681 1615539 out.go:291] Setting OutFile to fd 1 ...
I0731 22:47:26.151786 1615539 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:47:26.151893 1615539 out.go:304] Setting ErrFile to fd 2...
I0731 22:47:26.151902 1615539 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:47:26.152916 1615539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1579223/.minikube/bin
I0731 22:47:26.153846 1615539 config.go:182] Loaded profile config "functional-818778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 22:47:26.155829 1615539 config.go:182] Loaded profile config "functional-818778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 22:47:26.156489 1615539 cli_runner.go:164] Run: docker container inspect functional-818778 --format={{.State.Status}}
I0731 22:47:26.179257 1615539 ssh_runner.go:195] Run: systemctl --version
I0731 22:47:26.179311 1615539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-818778
I0731 22:47:26.204969 1615539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34651 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/functional-818778/id_rsa Username:docker}
I0731 22:47:26.309741 1615539 build_images.go:161] Building image from path: /tmp/build.1259847206.tar
I0731 22:47:26.309832 1615539 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0731 22:47:26.319583 1615539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1259847206.tar
I0731 22:47:26.323076 1615539 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1259847206.tar: stat -c "%s %y" /var/lib/minikube/build/build.1259847206.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1259847206.tar': No such file or directory
I0731 22:47:26.323140 1615539 ssh_runner.go:362] scp /tmp/build.1259847206.tar --> /var/lib/minikube/build/build.1259847206.tar (3072 bytes)
I0731 22:47:26.349957 1615539 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1259847206
I0731 22:47:26.373940 1615539 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1259847206 -xf /var/lib/minikube/build/build.1259847206.tar
I0731 22:47:26.387168 1615539 crio.go:315] Building image: /var/lib/minikube/build/build.1259847206
I0731 22:47:26.387237 1615539 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-818778 /var/lib/minikube/build/build.1259847206 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0731 22:47:28.147799 1615539 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-818778 /var/lib/minikube/build/build.1259847206 --cgroup-manager=cgroupfs: (1.760533147s)
I0731 22:47:28.147879 1615539 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1259847206
I0731 22:47:28.159111 1615539 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1259847206.tar
I0731 22:47:28.169711 1615539 build_images.go:217] Built localhost/my-image:functional-818778 from /tmp/build.1259847206.tar
I0731 22:47:28.169742 1615539 build_images.go:133] succeeded building to: functional-818778
I0731 22:47:28.169747 1615539 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.67s)
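
The three STEP lines in the build output pin down the shape of testdata/build; a hypothetical reconstruction follows (the real content.txt payload is not captured in this log, so the placeholder below is invented):

	# Hypothetical recreation of testdata/build, inferred from STEP 1/3..3/3 above.
	mkdir -p testdata/build && cd testdata/build
	echo placeholder > content.txt   # actual contents not shown in the log
	printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
	out/minikube-linux-arm64 -p functional-818778 image build -t localhost/my-image:functional-818778 . --alsologtostderr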

TestFunctional/parallel/ImageCommands/Setup (0.78s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-818778
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.78s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 image load --daemon docker.io/kicbase/echo-server:functional-818778 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-818778 image load --daemon docker.io/kicbase/echo-server:functional-818778 --alsologtostderr: (1.752150055s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.00s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 image load --daemon docker.io/kicbase/echo-server:functional-818778 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-818778
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 image load --daemon docker.io/kicbase/echo-server:functional-818778 --alsologtostderr
2024/07/31 22:47:21 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.26s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 image save docker.io/kicbase/echo-server:functional-818778 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.64s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 image rm docker.io/kicbase/echo-server:functional-818778 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.60s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.95s)
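
Taken together, ImageSaveToFile, ImageRemove, and ImageLoadFromFile above exercise a full save/remove/load round trip; condensed, with an illustrative /tmp path standing in for the workspace one:

	# Round trip assembled from the three tests above (tar path is illustrative).
	out/minikube-linux-arm64 -p functional-818778 image save docker.io/kicbase/echo-server:functional-818778 /tmp/echo-server-save.tar
	out/minikube-linux-arm64 -p functional-818778 image rm docker.io/kicbase/echo-server:functional-818778
	out/minikube-linux-arm64 -p functional-818778 image load /tmp/echo-server-save.tar
	out/minikube-linux-arm64 -p functional-818778 image ls   # the tag should be listed again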

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-818778
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-818778 image save --daemon docker.io/kicbase/echo-server:functional-818778 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-818778
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.81s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-818778
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-818778
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-818778
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (192.52s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-684862 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0731 22:48:04.175201 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
E0731 22:50:20.329857 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-684862 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (3m11.711734196s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (192.52s)

TestMultiControlPlane/serial/DeployApp (6.39s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-684862 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-684862 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-684862 -- rollout status deployment/busybox: (3.535585346s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-684862 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-684862 -- get pods -o jsonpath='{.items[*].metadata.name}'
E0731 22:50:48.015947 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-684862 -- exec busybox-fc5497c4f-lmpzv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-684862 -- exec busybox-fc5497c4f-swkmh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-684862 -- exec busybox-fc5497c4f-vfhxp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-684862 -- exec busybox-fc5497c4f-lmpzv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-684862 -- exec busybox-fc5497c4f-swkmh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-684862 -- exec busybox-fc5497c4f-vfhxp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-684862 -- exec busybox-fc5497c4f-lmpzv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-684862 -- exec busybox-fc5497c4f-swkmh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-684862 -- exec busybox-fc5497c4f-vfhxp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.39s)

TestMultiControlPlane/serial/PingHostFromPods (1.6s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-684862 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-684862 -- exec busybox-fc5497c4f-lmpzv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-684862 -- exec busybox-fc5497c4f-lmpzv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-684862 -- exec busybox-fc5497c4f-swkmh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-684862 -- exec busybox-fc5497c4f-swkmh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-684862 -- exec busybox-fc5497c4f-vfhxp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-684862 -- exec busybox-fc5497c4f-vfhxp -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.60s)
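
The pipeline in this test isolates the host IP from the pod's nslookup output: awk 'NR==5' keeps only the fifth line (presumably the "Address 1: 192.168.49.1 host.minikube.internal" line in busybox nslookup's layout) and cut -d' ' -f3 takes its third space-delimited field, the address that the follow-up "ping -c 1 192.168.49.1" targets. A sketch of the first probe, under that output-layout assumption:

	# Resolve host.minikube.internal inside the pod and extract the IP; the
	# line/field positions assume busybox nslookup's output format.
	out/minikube-linux-arm64 kubectl -p ha-684862 -- exec busybox-fc5497c4f-lmpzv -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	# Expected output: 192.168.49.1, which the test then pings once from the pod.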

TestMultiControlPlane/serial/AddWorkerNode (37.97s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-684862 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-684862 -v=7 --alsologtostderr: (36.993261424s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (37.97s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-684862 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.82s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.82s)

TestMultiControlPlane/serial/CopyFile (18.72s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 cp testdata/cp-test.txt ha-684862:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 cp ha-684862:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile245682627/001/cp-test_ha-684862.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 cp ha-684862:/home/docker/cp-test.txt ha-684862-m02:/home/docker/cp-test_ha-684862_ha-684862-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862-m02 "sudo cat /home/docker/cp-test_ha-684862_ha-684862-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 cp ha-684862:/home/docker/cp-test.txt ha-684862-m03:/home/docker/cp-test_ha-684862_ha-684862-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862-m03 "sudo cat /home/docker/cp-test_ha-684862_ha-684862-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 cp ha-684862:/home/docker/cp-test.txt ha-684862-m04:/home/docker/cp-test_ha-684862_ha-684862-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862-m04 "sudo cat /home/docker/cp-test_ha-684862_ha-684862-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 cp testdata/cp-test.txt ha-684862-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 cp ha-684862-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile245682627/001/cp-test_ha-684862-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 cp ha-684862-m02:/home/docker/cp-test.txt ha-684862:/home/docker/cp-test_ha-684862-m02_ha-684862.txt
E0731 22:51:37.673379 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
E0731 22:51:37.679335 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
E0731 22:51:37.689584 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
E0731 22:51:37.709878 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
E0731 22:51:37.750200 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862-m02 "sudo cat /home/docker/cp-test.txt"
E0731 22:51:37.830536 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
E0731 22:51:37.990801 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862 "sudo cat /home/docker/cp-test_ha-684862-m02_ha-684862.txt"
E0731 22:51:38.312689 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 cp ha-684862-m02:/home/docker/cp-test.txt ha-684862-m03:/home/docker/cp-test_ha-684862-m02_ha-684862-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862-m02 "sudo cat /home/docker/cp-test.txt"
E0731 22:51:38.954614 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862-m03 "sudo cat /home/docker/cp-test_ha-684862-m02_ha-684862-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 cp ha-684862-m02:/home/docker/cp-test.txt ha-684862-m04:/home/docker/cp-test_ha-684862-m02_ha-684862-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862-m02 "sudo cat /home/docker/cp-test.txt"
E0731 22:51:40.235161 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862-m04 "sudo cat /home/docker/cp-test_ha-684862-m02_ha-684862-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 cp testdata/cp-test.txt ha-684862-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 cp ha-684862-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile245682627/001/cp-test_ha-684862-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 cp ha-684862-m03:/home/docker/cp-test.txt ha-684862:/home/docker/cp-test_ha-684862-m03_ha-684862.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862 "sudo cat /home/docker/cp-test_ha-684862-m03_ha-684862.txt"
E0731 22:51:42.795325 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 cp ha-684862-m03:/home/docker/cp-test.txt ha-684862-m02:/home/docker/cp-test_ha-684862-m03_ha-684862-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862-m02 "sudo cat /home/docker/cp-test_ha-684862-m03_ha-684862-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 cp ha-684862-m03:/home/docker/cp-test.txt ha-684862-m04:/home/docker/cp-test_ha-684862-m03_ha-684862-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862-m04 "sudo cat /home/docker/cp-test_ha-684862-m03_ha-684862-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 cp testdata/cp-test.txt ha-684862-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 cp ha-684862-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile245682627/001/cp-test_ha-684862-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 cp ha-684862-m04:/home/docker/cp-test.txt ha-684862:/home/docker/cp-test_ha-684862-m04_ha-684862.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862 "sudo cat /home/docker/cp-test_ha-684862-m04_ha-684862.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 cp ha-684862-m04:/home/docker/cp-test.txt ha-684862-m02:/home/docker/cp-test_ha-684862-m04_ha-684862-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862-m04 "sudo cat /home/docker/cp-test.txt"
E0731 22:51:47.915895 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862-m02 "sudo cat /home/docker/cp-test_ha-684862-m04_ha-684862-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 cp ha-684862-m04:/home/docker/cp-test.txt ha-684862-m03:/home/docker/cp-test_ha-684862-m04_ha-684862-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 ssh -n ha-684862-m03 "sudo cat /home/docker/cp-test_ha-684862-m04_ha-684862-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.72s)

TestMultiControlPlane/serial/StopSecondaryNode (12.72s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 node stop m02 -v=7 --alsologtostderr
E0731 22:51:58.156942 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-684862 node stop m02 -v=7 --alsologtostderr: (11.994221284s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-684862 status -v=7 --alsologtostderr: exit status 7 (720.911825ms)

-- stdout --
	ha-684862
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-684862-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-684862-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-684862-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0731 22:52:01.571908 1631557 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:52:01.572064 1631557 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:52:01.572075 1631557 out.go:304] Setting ErrFile to fd 2...
	I0731 22:52:01.572081 1631557 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:52:01.572368 1631557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1579223/.minikube/bin
	I0731 22:52:01.572588 1631557 out.go:298] Setting JSON to false
	I0731 22:52:01.572644 1631557 mustload.go:65] Loading cluster: ha-684862
	I0731 22:52:01.572770 1631557 notify.go:220] Checking for updates...
	I0731 22:52:01.573093 1631557 config.go:182] Loaded profile config "ha-684862": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:52:01.573149 1631557 status.go:255] checking status of ha-684862 ...
	I0731 22:52:01.573646 1631557 cli_runner.go:164] Run: docker container inspect ha-684862 --format={{.State.Status}}
	I0731 22:52:01.592849 1631557 status.go:330] ha-684862 host status = "Running" (err=<nil>)
	I0731 22:52:01.592887 1631557 host.go:66] Checking if "ha-684862" exists ...
	I0731 22:52:01.593293 1631557 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-684862
	I0731 22:52:01.619718 1631557 host.go:66] Checking if "ha-684862" exists ...
	I0731 22:52:01.620066 1631557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:52:01.620114 1631557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-684862
	I0731 22:52:01.637784 1631557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34656 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/ha-684862/id_rsa Username:docker}
	I0731 22:52:01.735199 1631557 ssh_runner.go:195] Run: systemctl --version
	I0731 22:52:01.740057 1631557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:52:01.756183 1631557 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 22:52:01.812724 1631557 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-07-31 22:52:01.802594601 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0731 22:52:01.813488 1631557 kubeconfig.go:125] found "ha-684862" server: "https://192.168.49.254:8443"
	I0731 22:52:01.813521 1631557 api_server.go:166] Checking apiserver status ...
	I0731 22:52:01.813577 1631557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:52:01.827515 1631557 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1412/cgroup
	I0731 22:52:01.837220 1631557 api_server.go:182] apiserver freezer: "5:freezer:/docker/d1c8e70e7c4b24d541a37032dbd441ce347243eb67068447408f4804f48c6c23/crio/crio-8ee83f8b8c2e83916d14336321e1fc8a532c3cfb97e6a4afd39b47c513519906"
	I0731 22:52:01.837309 1631557 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d1c8e70e7c4b24d541a37032dbd441ce347243eb67068447408f4804f48c6c23/crio/crio-8ee83f8b8c2e83916d14336321e1fc8a532c3cfb97e6a4afd39b47c513519906/freezer.state
	I0731 22:52:01.846726 1631557 api_server.go:204] freezer state: "THAWED"
	I0731 22:52:01.846755 1631557 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0731 22:52:01.856033 1631557 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0731 22:52:01.856069 1631557 status.go:422] ha-684862 apiserver status = Running (err=<nil>)
	I0731 22:52:01.856081 1631557 status.go:257] ha-684862 status: &{Name:ha-684862 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:52:01.856098 1631557 status.go:255] checking status of ha-684862-m02 ...
	I0731 22:52:01.856448 1631557 cli_runner.go:164] Run: docker container inspect ha-684862-m02 --format={{.State.Status}}
	I0731 22:52:01.877331 1631557 status.go:330] ha-684862-m02 host status = "Stopped" (err=<nil>)
	I0731 22:52:01.877402 1631557 status.go:343] host is not running, skipping remaining checks
	I0731 22:52:01.877417 1631557 status.go:257] ha-684862-m02 status: &{Name:ha-684862-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:52:01.877441 1631557 status.go:255] checking status of ha-684862-m03 ...
	I0731 22:52:01.877791 1631557 cli_runner.go:164] Run: docker container inspect ha-684862-m03 --format={{.State.Status}}
	I0731 22:52:01.894288 1631557 status.go:330] ha-684862-m03 host status = "Running" (err=<nil>)
	I0731 22:52:01.894311 1631557 host.go:66] Checking if "ha-684862-m03" exists ...
	I0731 22:52:01.894603 1631557 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-684862-m03
	I0731 22:52:01.911237 1631557 host.go:66] Checking if "ha-684862-m03" exists ...
	I0731 22:52:01.911547 1631557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:52:01.911647 1631557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-684862-m03
	I0731 22:52:01.928881 1631557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34666 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/ha-684862-m03/id_rsa Username:docker}
	I0731 22:52:02.023773 1631557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:52:02.036931 1631557 kubeconfig.go:125] found "ha-684862" server: "https://192.168.49.254:8443"
	I0731 22:52:02.036962 1631557 api_server.go:166] Checking apiserver status ...
	I0731 22:52:02.037022 1631557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:52:02.048591 1631557 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup
	I0731 22:52:02.058904 1631557 api_server.go:182] apiserver freezer: "5:freezer:/docker/42d5f8bc48cfd42ef144110c01961685b1a376699a2b7b948755ff8d2240f0ab/crio/crio-55e6569f5cf57a8bfa455601e87c17066bc499424821621bba217b7abbe143a0"
	I0731 22:52:02.058982 1631557 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/42d5f8bc48cfd42ef144110c01961685b1a376699a2b7b948755ff8d2240f0ab/crio/crio-55e6569f5cf57a8bfa455601e87c17066bc499424821621bba217b7abbe143a0/freezer.state
	I0731 22:52:02.068023 1631557 api_server.go:204] freezer state: "THAWED"
	I0731 22:52:02.068052 1631557 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0731 22:52:02.076178 1631557 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0731 22:52:02.076225 1631557 status.go:422] ha-684862-m03 apiserver status = Running (err=<nil>)
	I0731 22:52:02.076241 1631557 status.go:257] ha-684862-m03 status: &{Name:ha-684862-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:52:02.076262 1631557 status.go:255] checking status of ha-684862-m04 ...
	I0731 22:52:02.076577 1631557 cli_runner.go:164] Run: docker container inspect ha-684862-m04 --format={{.State.Status}}
	I0731 22:52:02.095874 1631557 status.go:330] ha-684862-m04 host status = "Running" (err=<nil>)
	I0731 22:52:02.095901 1631557 host.go:66] Checking if "ha-684862-m04" exists ...
	I0731 22:52:02.096196 1631557 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-684862-m04
	I0731 22:52:02.114236 1631557 host.go:66] Checking if "ha-684862-m04" exists ...
	I0731 22:52:02.114532 1631557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:52:02.114584 1631557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-684862-m04
	I0731 22:52:02.132616 1631557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34671 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/ha-684862-m04/id_rsa Username:docker}
	I0731 22:52:02.226506 1631557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:52:02.242224 1631557 status.go:257] ha-684862-m04 status: &{Name:ha-684862-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.72s)
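
For reference, the node-stop scenario above can be replayed by hand against any multi-control-plane profile. A minimal sketch, assuming a locally installed minikube binary (the CI run uses out/minikube-linux-arm64) and reusing the profile name from this run:

    # stop the second control-plane node of the ha-684862 profile
    minikube -p ha-684862 node stop m02 -v=7 --alsologtostderr
    # status now reports the stopped node and exits non-zero (exit status 7 in this run)
    minikube -p ha-684862 status -v=7 --alsologtostderr
    echo $?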

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.59s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.59s)

TestMultiControlPlane/serial/RestartSecondaryNode (48.32s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 node start m02 -v=7 --alsologtostderr
E0731 22:52:18.638090 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-684862 node start m02 -v=7 --alsologtostderr: (47.238578307s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (48.32s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.72s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.72s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (144.88s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-684862 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-684862 -v=7 --alsologtostderr
E0731 22:52:59.598373 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-684862 -v=7 --alsologtostderr: (36.909617133s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-684862 --wait=true -v=7 --alsologtostderr
E0731 22:54:21.519084 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-684862 --wait=true -v=7 --alsologtostderr: (1m47.813490929s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-684862
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (144.88s)
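
The stop/restart cycle above can be reproduced with the same flags the test passes; a sketch assuming a local minikube binary, with the profile name taken from this run:

    minikube stop -p ha-684862 -v=7 --alsologtostderr
    minikube start -p ha-684862 --wait=true -v=7 --alsologtostderr
    # the node list should match the pre-stop list
    minikube node list -p ha-684862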

TestMultiControlPlane/serial/DeleteSecondaryNode (12.79s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 node delete m03 -v=7 --alsologtostderr
E0731 22:55:20.329340 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-684862 node delete m03 -v=7 --alsologtostderr: (11.917270885s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.79s)
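
A hand reproduction of the delete-and-verify sequence, mirroring the commands logged above (the go-template quoting is copied verbatim from the test invocation):

    minikube -p ha-684862 node delete m03 -v=7 --alsologtostderr
    kubectl get nodes
    # every remaining node should report Ready=True
    kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"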

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.54s)

TestMultiControlPlane/serial/StopCluster (35.88s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-684862 stop -v=7 --alsologtostderr: (35.767883513s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-684862 status -v=7 --alsologtostderr: exit status 7 (109.605002ms)

-- stdout --
	ha-684862
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-684862-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-684862-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0731 22:56:05.900294 1645459 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:56:05.900501 1645459 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:56:05.900527 1645459 out.go:304] Setting ErrFile to fd 2...
	I0731 22:56:05.900545 1645459 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:56:05.900834 1645459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1579223/.minikube/bin
	I0731 22:56:05.901048 1645459 out.go:298] Setting JSON to false
	I0731 22:56:05.901148 1645459 mustload.go:65] Loading cluster: ha-684862
	I0731 22:56:05.901202 1645459 notify.go:220] Checking for updates...
	I0731 22:56:05.901646 1645459 config.go:182] Loaded profile config "ha-684862": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:56:05.901686 1645459 status.go:255] checking status of ha-684862 ...
	I0731 22:56:05.902452 1645459 cli_runner.go:164] Run: docker container inspect ha-684862 --format={{.State.Status}}
	I0731 22:56:05.919979 1645459 status.go:330] ha-684862 host status = "Stopped" (err=<nil>)
	I0731 22:56:05.920023 1645459 status.go:343] host is not running, skipping remaining checks
	I0731 22:56:05.920051 1645459 status.go:257] ha-684862 status: &{Name:ha-684862 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:56:05.920084 1645459 status.go:255] checking status of ha-684862-m02 ...
	I0731 22:56:05.920380 1645459 cli_runner.go:164] Run: docker container inspect ha-684862-m02 --format={{.State.Status}}
	I0731 22:56:05.940602 1645459 status.go:330] ha-684862-m02 host status = "Stopped" (err=<nil>)
	I0731 22:56:05.940625 1645459 status.go:343] host is not running, skipping remaining checks
	I0731 22:56:05.940632 1645459 status.go:257] ha-684862-m02 status: &{Name:ha-684862-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:56:05.940659 1645459 status.go:255] checking status of ha-684862-m04 ...
	I0731 22:56:05.940955 1645459 cli_runner.go:164] Run: docker container inspect ha-684862-m04 --format={{.State.Status}}
	I0731 22:56:05.961293 1645459 status.go:330] ha-684862-m04 host status = "Stopped" (err=<nil>)
	I0731 22:56:05.961315 1645459 status.go:343] host is not running, skipping remaining checks
	I0731 22:56:05.961335 1645459 status.go:257] ha-684862-m04 status: &{Name:ha-684862-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.88s)

TestMultiControlPlane/serial/RestartCluster (97.68s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-684862 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0731 22:56:37.672992 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
E0731 22:57:05.359977 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-684862 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m36.751256369s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (97.68s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.56s)

TestMultiControlPlane/serial/AddSecondaryNode (47.53s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-684862 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-684862 --control-plane -v=7 --alsologtostderr: (46.542702049s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-684862 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (47.53s)
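
Adding a control-plane node back can be replayed with the same flags the test uses; a sketch assuming a local minikube binary:

    minikube node add -p ha-684862 --control-plane -v=7 --alsologtostderr
    minikube -p ha-684862 status -v=7 --alsologtostderr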

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.76s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.76s)

TestJSONOutput/start/Command (59.41s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-181402 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-181402 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (59.404666656s)
--- PASS: TestJSONOutput/start/Command (59.41s)
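
With --output=json, each progress step is emitted as one CloudEvents JSON object per line, so the stream is easy to post-process. A sketch assuming jq is installed, with a hypothetical profile name json-demo:

    minikube start -p json-demo --output=json --user=testUser --memory=2200 \
        --wait=true --driver=docker --container-runtime=crio | jq -r '.type'
    # prints event types such as io.k8s.sigs.minikube.step and io.k8s.sigs.minikube.info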

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-181402 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.63s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-181402 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.8s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-181402 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-181402 --output=json --user=testUser: (5.796285916s)
--- PASS: TestJSONOutput/stop/Command (5.80s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-866418 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-866418 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (74.528776ms)

-- stdout --
	{"specversion":"1.0","id":"d2bb6f49-2f2d-4f74-a332-1ed18690165d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-866418] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b585ffed-8aa8-4c79-9008-33d2090f806b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19360"}}
	{"specversion":"1.0","id":"2d5af7dd-a313-4f1f-b782-cd6c22bcd927","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c99b75dd-9da6-4519-af6c-3414ac336f47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19360-1579223/kubeconfig"}}
	{"specversion":"1.0","id":"8cef3f50-e37b-4dea-b1b8-6666a9eb1d4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1579223/.minikube"}}
	{"specversion":"1.0","id":"57ca2b72-94cc-42d3-8130-70cde40b21ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"c6a07021-a10e-4ece-bd28-9bb32ae5d8ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"dc903081-ef4f-4f69-8c48-d60e93838dec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-866418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-866418
--- PASS: TestErrorJSONOutput (0.22s)
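
The error path can be triggered deliberately: an unsupported driver makes minikube emit an io.k8s.sigs.minikube.error event (DRV_UNSUPPORTED_OS above) and exit with code 56. A sketch using a hypothetical profile name:

    minikube start -p json-err --memory=2200 --output=json --wait=true --driver=fail
    echo $?   # 56 on linux/arm64, matching the event above
    minikube delete -p json-err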

TestKicCustomNetwork/create_custom_network (38.7s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-501701 --network=
E0731 23:00:20.329343 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-501701 --network=: (36.502316087s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-501701" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-501701
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-501701: (2.173414749s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.70s)
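
Passing --network= with an empty value makes minikube create a dedicated Docker network named after the profile; the listing check below mirrors the logged docker command. A sketch with a hypothetical profile name:

    minikube start -p net-demo --network=
    docker network ls --format {{.Name}}   # the profile's network should appear
    minikube delete -p net-demo            # clean up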

TestKicCustomNetwork/use_default_bridge_network (32.95s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-043217 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-043217 --network=bridge: (30.993591429s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-043217" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-043217
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-043217: (1.930697765s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.95s)

TestKicExistingNetwork (31.85s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-511005 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-511005 --network=existing-network: (29.627016015s)
helpers_test.go:175: Cleaning up "existing-network-511005" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-511005
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-511005: (2.052139542s)
--- PASS: TestKicExistingNetwork (31.85s)
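
The log only shows the start against an already-present network; presumably the test pre-creates it. A hand reproduction might look like this (the docker network create step is an assumption, not shown above):

    docker network create existing-network    # assumed setup step
    minikube start -p existing-net-demo --network=existing-network
    minikube delete -p existing-net-demo
    docker network rm existing-network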

TestKicCustomSubnet (35.09s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-093425 --subnet=192.168.60.0/24
E0731 23:01:37.673584 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
E0731 23:01:43.376235 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-093425 --subnet=192.168.60.0/24: (32.936964598s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-093425 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-093425" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-093425
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-093425: (2.128487732s)
--- PASS: TestKicCustomSubnet (35.09s)
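
The subnet check can be replayed directly; the docker network inspect invocation below is the one logged above, pointed at a hypothetical profile name:

    minikube start -p subnet-demo --subnet=192.168.60.0/24
    docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"
    # expected output: 192.168.60.0/24
    minikube delete -p subnet-demo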

TestKicStaticIP (33.93s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-968550 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-968550 --static-ip=192.168.200.200: (31.731500297s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-968550 ip
helpers_test.go:175: Cleaning up "static-ip-968550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-968550
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-968550: (2.059697374s)
--- PASS: TestKicStaticIP (33.93s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (66.92s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-430763 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-430763 --driver=docker  --container-runtime=crio: (29.563463001s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-433643 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-433643 --driver=docker  --container-runtime=crio: (31.897801918s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-430763
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-433643
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-433643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-433643
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-433643: (1.964345023s)
helpers_test.go:175: Cleaning up "first-430763" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-430763
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-430763: (2.2730118s)
--- PASS: TestMinikubeProfile (66.92s)
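
The profile-switching flow above, condensed; a sketch assuming a local minikube binary and hypothetical profile names:

    minikube start -p first --driver=docker --container-runtime=crio
    minikube start -p second --driver=docker --container-runtime=crio
    minikube profile first        # make "first" the active profile
    minikube profile list -ojson  # inspect both profiles as JSON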

TestMountStart/serial/StartWithMountFirst (6.79s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-021837 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-021837 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.792856889s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.79s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-021837 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
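
The mount flags exercised here can be replayed in one shot; the host directory shows up inside the guest at /minikube-host, which is what the ssh check verifies. A sketch with a hypothetical profile name:

    minikube start -p mount-demo --memory=2048 --mount --mount-gid 0 --mount-msize 6543 \
        --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=crio
    minikube -p mount-demo ssh -- ls /minikube-host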

TestMountStart/serial/StartWithMountSecond (9.66s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-035244 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-035244 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.656783403s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.66s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-035244 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.6s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-021837 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-021837 --alsologtostderr -v=5: (1.601196179s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-035244 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-035244
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-035244: (1.200887416s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (8.29s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-035244
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-035244: (7.292641833s)
--- PASS: TestMountStart/serial/RestartStopped (8.29s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-035244 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (91.46s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-246149 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0731 23:05:20.329833 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-246149 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m30.948774014s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (91.46s)

TestMultiNode/serial/DeployApp2Nodes (4.67s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-246149 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-246149 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-246149 -- rollout status deployment/busybox: (2.77029268s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-246149 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-246149 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-246149 -- exec busybox-fc5497c4f-frq9w -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-246149 -- exec busybox-fc5497c4f-s7fgw -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-246149 -- exec busybox-fc5497c4f-frq9w -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-246149 -- exec busybox-fc5497c4f-s7fgw -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-246149 -- exec busybox-fc5497c4f-frq9w -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-246149 -- exec busybox-fc5497c4f-s7fgw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.67s)
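
The deploy-and-resolve sequence mirrors ordinary kubectl usage through minikube's wrapper; a sketch assuming the busybox manifest from the minikube repo's testdata and a hypothetical profile name (pod names will differ per run):

    minikube kubectl -p mn-demo -- apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    minikube kubectl -p mn-demo -- rollout status deployment/busybox
    minikube kubectl -p mn-demo -- get pods -o jsonpath='{.items[*].metadata.name}'
    minikube kubectl -p mn-demo -- exec <pod-name> -- nslookup kubernetes.default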

TestMultiNode/serial/PingHostFrom2Pods (0.97s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-246149 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-246149 -- exec busybox-fc5497c4f-frq9w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-246149 -- exec busybox-fc5497c4f-frq9w -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-246149 -- exec busybox-fc5497c4f-s7fgw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-246149 -- exec busybox-fc5497c4f-s7fgw -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)

TestMultiNode/serial/AddNode (32.34s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-246149 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-246149 -v 3 --alsologtostderr: (31.662799838s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (32.34s)
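A minimal sketch of the node-add flow, again with the hypothetical demo profile:

	# add a worker node to the existing profile (verbose client logging)
	minikube node add -p demo -v 3 --alsologtostderr
	# all hosts, kubelets, and the apiserver should now report Running
	minikube -p demo status --alsologtostderr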

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-246149 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.32s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 cp testdata/cp-test.txt multinode-246149:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 ssh -n multinode-246149 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 cp multinode-246149:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile696551372/001/cp-test_multinode-246149.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 ssh -n multinode-246149 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 cp multinode-246149:/home/docker/cp-test.txt multinode-246149-m02:/home/docker/cp-test_multinode-246149_multinode-246149-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 ssh -n multinode-246149 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 ssh -n multinode-246149-m02 "sudo cat /home/docker/cp-test_multinode-246149_multinode-246149-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 cp multinode-246149:/home/docker/cp-test.txt multinode-246149-m03:/home/docker/cp-test_multinode-246149_multinode-246149-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 ssh -n multinode-246149 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 ssh -n multinode-246149-m03 "sudo cat /home/docker/cp-test_multinode-246149_multinode-246149-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 cp testdata/cp-test.txt multinode-246149-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 ssh -n multinode-246149-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 cp multinode-246149-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile696551372/001/cp-test_multinode-246149-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 ssh -n multinode-246149-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 cp multinode-246149-m02:/home/docker/cp-test.txt multinode-246149:/home/docker/cp-test_multinode-246149-m02_multinode-246149.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 ssh -n multinode-246149-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 ssh -n multinode-246149 "sudo cat /home/docker/cp-test_multinode-246149-m02_multinode-246149.txt"
E0731 23:06:37.673837 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 cp multinode-246149-m02:/home/docker/cp-test.txt multinode-246149-m03:/home/docker/cp-test_multinode-246149-m02_multinode-246149-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 ssh -n multinode-246149-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 ssh -n multinode-246149-m03 "sudo cat /home/docker/cp-test_multinode-246149-m02_multinode-246149-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 cp testdata/cp-test.txt multinode-246149-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 ssh -n multinode-246149-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 cp multinode-246149-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile696551372/001/cp-test_multinode-246149-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 ssh -n multinode-246149-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 cp multinode-246149-m03:/home/docker/cp-test.txt multinode-246149:/home/docker/cp-test_multinode-246149-m03_multinode-246149.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 ssh -n multinode-246149-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 ssh -n multinode-246149 "sudo cat /home/docker/cp-test_multinode-246149-m03_multinode-246149.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 cp multinode-246149-m03:/home/docker/cp-test.txt multinode-246149-m02:/home/docker/cp-test_multinode-246149-m03_multinode-246149-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 ssh -n multinode-246149-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 ssh -n multinode-246149-m02 "sudo cat /home/docker/cp-test_multinode-246149-m03_multinode-246149-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.92s)
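The copy matrix above boils down to three directions of minikube cp plus an ssh verification; a minimal sketch with the placeholder demo profile (worker machines are suffixed -m02, -m03, ...):

	# host -> node
	minikube -p demo cp testdata/cp-test.txt demo:/home/docker/cp-test.txt
	# node -> host
	minikube -p demo cp demo:/home/docker/cp-test.txt /tmp/cp-test_demo.txt
	# node -> node
	minikube -p demo cp demo:/home/docker/cp-test.txt demo-m02:/home/docker/cp-test_demo.txt
	# confirm the copy landed
	minikube -p demo ssh -n demo-m02 "sudo cat /home/docker/cp-test_demo.txt"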

                                                
                                    
TestMultiNode/serial/StopNode (2.20s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-246149 node stop m03: (1.215404517s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-246149 status: exit status 7 (496.566521ms)

                                                
                                                
-- stdout --
	multinode-246149
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-246149-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-246149-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-246149 status --alsologtostderr: exit status 7 (491.386051ms)

                                                
                                                
-- stdout --
	multinode-246149
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-246149-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-246149-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 23:06:43.534763 1699451 out.go:291] Setting OutFile to fd 1 ...
	I0731 23:06:43.534914 1699451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:06:43.534927 1699451 out.go:304] Setting ErrFile to fd 2...
	I0731 23:06:43.534933 1699451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:06:43.535162 1699451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1579223/.minikube/bin
	I0731 23:06:43.535334 1699451 out.go:298] Setting JSON to false
	I0731 23:06:43.535369 1699451 mustload.go:65] Loading cluster: multinode-246149
	I0731 23:06:43.535502 1699451 notify.go:220] Checking for updates...
	I0731 23:06:43.535754 1699451 config.go:182] Loaded profile config "multinode-246149": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 23:06:43.535764 1699451 status.go:255] checking status of multinode-246149 ...
	I0731 23:06:43.536215 1699451 cli_runner.go:164] Run: docker container inspect multinode-246149 --format={{.State.Status}}
	I0731 23:06:43.557232 1699451 status.go:330] multinode-246149 host status = "Running" (err=<nil>)
	I0731 23:06:43.557261 1699451 host.go:66] Checking if "multinode-246149" exists ...
	I0731 23:06:43.557544 1699451 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-246149
	I0731 23:06:43.578931 1699451 host.go:66] Checking if "multinode-246149" exists ...
	I0731 23:06:43.579308 1699451 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 23:06:43.579370 1699451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-246149
	I0731 23:06:43.599756 1699451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34776 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/multinode-246149/id_rsa Username:docker}
	I0731 23:06:43.690476 1699451 ssh_runner.go:195] Run: systemctl --version
	I0731 23:06:43.694785 1699451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 23:06:43.706598 1699451 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 23:06:43.765774 1699451 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-07-31 23:06:43.75641991 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0731 23:06:43.766338 1699451 kubeconfig.go:125] found "multinode-246149" server: "https://192.168.58.2:8443"
	I0731 23:06:43.766374 1699451 api_server.go:166] Checking apiserver status ...
	I0731 23:06:43.766430 1699451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:06:43.777672 1699451 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1415/cgroup
	I0731 23:06:43.786785 1699451 api_server.go:182] apiserver freezer: "5:freezer:/docker/68a0cdf37811278a16d9752d7702d9a61919b9f2571e2578d0b27dee83b07721/crio/crio-3d8c26db50804aeada3b58f245f8a8489f72fc27b0dea662423fce93209ce0ea"
	I0731 23:06:43.786859 1699451 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/68a0cdf37811278a16d9752d7702d9a61919b9f2571e2578d0b27dee83b07721/crio/crio-3d8c26db50804aeada3b58f245f8a8489f72fc27b0dea662423fce93209ce0ea/freezer.state
	I0731 23:06:43.795869 1699451 api_server.go:204] freezer state: "THAWED"
	I0731 23:06:43.795898 1699451 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0731 23:06:43.804379 1699451 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0731 23:06:43.804404 1699451 status.go:422] multinode-246149 apiserver status = Running (err=<nil>)
	I0731 23:06:43.804415 1699451 status.go:257] multinode-246149 status: &{Name:multinode-246149 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 23:06:43.804463 1699451 status.go:255] checking status of multinode-246149-m02 ...
	I0731 23:06:43.804796 1699451 cli_runner.go:164] Run: docker container inspect multinode-246149-m02 --format={{.State.Status}}
	I0731 23:06:43.820943 1699451 status.go:330] multinode-246149-m02 host status = "Running" (err=<nil>)
	I0731 23:06:43.820964 1699451 host.go:66] Checking if "multinode-246149-m02" exists ...
	I0731 23:06:43.821384 1699451 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-246149-m02
	I0731 23:06:43.838025 1699451 host.go:66] Checking if "multinode-246149-m02" exists ...
	I0731 23:06:43.838330 1699451 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 23:06:43.838378 1699451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-246149-m02
	I0731 23:06:43.854793 1699451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34781 SSHKeyPath:/home/jenkins/minikube-integration/19360-1579223/.minikube/machines/multinode-246149-m02/id_rsa Username:docker}
	I0731 23:06:43.946139 1699451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 23:06:43.957585 1699451 status.go:257] multinode-246149-m02 status: &{Name:multinode-246149-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0731 23:06:43.957627 1699451 status.go:255] checking status of multinode-246149-m03 ...
	I0731 23:06:43.957961 1699451 cli_runner.go:164] Run: docker container inspect multinode-246149-m03 --format={{.State.Status}}
	I0731 23:06:43.974235 1699451 status.go:330] multinode-246149-m03 host status = "Stopped" (err=<nil>)
	I0731 23:06:43.974260 1699451 status.go:343] host is not running, skipping remaining checks
	I0731 23:06:43.974267 1699451 status.go:257] multinode-246149-m03 status: &{Name:multinode-246149-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.20s)
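A minimal sketch of the single-node stop, with the placeholder demo profile; note that status deliberately exits 7 while any host is down:

	# stop only the third node
	minikube -p demo node stop m03
	# exit status 7 is expected here: m03 reports host/kubelet Stopped
	minikube -p demo status --alsologtostderr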

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-246149 node start m03 -v=7 --alsologtostderr: (9.081269365s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.83s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (115.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-246149
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-246149
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-246149: (24.782558966s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-246149 --wait=true -v=8 --alsologtostderr
E0731 23:08:00.720872 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-246149 --wait=true -v=8 --alsologtostderr: (1m30.936006765s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-246149
--- PASS: TestMultiNode/serial/RestartKeepsNodes (115.83s)
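A minimal sketch of the restart round-trip, assuming the demo profile:

	# record the node list, stop everything, restart, and compare
	minikube node list -p demo
	minikube stop -p demo
	minikube start -p demo --wait=true -v=8 --alsologtostderr
	minikube node list -p demo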

                                                
                                    
TestMultiNode/serial/DeleteNode (5.70s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-246149 node delete m03: (5.036802607s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.70s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-246149 stop: (23.617561092s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-246149 status: exit status 7 (85.215702ms)

                                                
                                                
-- stdout --
	multinode-246149
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-246149-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-246149 status --alsologtostderr: exit status 7 (88.082635ms)

                                                
                                                
-- stdout --
	multinode-246149
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-246149-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 23:09:19.095828 1707248 out.go:291] Setting OutFile to fd 1 ...
	I0731 23:09:19.095960 1707248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:09:19.096006 1707248 out.go:304] Setting ErrFile to fd 2...
	I0731 23:09:19.096013 1707248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:09:19.096368 1707248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1579223/.minikube/bin
	I0731 23:09:19.096556 1707248 out.go:298] Setting JSON to false
	I0731 23:09:19.096598 1707248 mustload.go:65] Loading cluster: multinode-246149
	I0731 23:09:19.096714 1707248 notify.go:220] Checking for updates...
	I0731 23:09:19.097020 1707248 config.go:182] Loaded profile config "multinode-246149": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 23:09:19.097030 1707248 status.go:255] checking status of multinode-246149 ...
	I0731 23:09:19.097537 1707248 cli_runner.go:164] Run: docker container inspect multinode-246149 --format={{.State.Status}}
	I0731 23:09:19.116049 1707248 status.go:330] multinode-246149 host status = "Stopped" (err=<nil>)
	I0731 23:09:19.116070 1707248 status.go:343] host is not running, skipping remaining checks
	I0731 23:09:19.116077 1707248 status.go:257] multinode-246149 status: &{Name:multinode-246149 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 23:09:19.116105 1707248 status.go:255] checking status of multinode-246149-m02 ...
	I0731 23:09:19.116469 1707248 cli_runner.go:164] Run: docker container inspect multinode-246149-m02 --format={{.State.Status}}
	I0731 23:09:19.138637 1707248 status.go:330] multinode-246149-m02 host status = "Stopped" (err=<nil>)
	I0731 23:09:19.138672 1707248 status.go:343] host is not running, skipping remaining checks
	I0731 23:09:19.138680 1707248 status.go:257] multinode-246149-m02 status: &{Name:multinode-246149-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.79s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (49.00s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-246149 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-246149 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (48.17831431s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-246149 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.00s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (34.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-246149
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-246149-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-246149-m02 --driver=docker  --container-runtime=crio: exit status 14 (125.82687ms)

                                                
                                                
-- stdout --
	* [multinode-246149-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19360-1579223/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1579223/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-246149-m02' is duplicated with machine name 'multinode-246149-m02' in profile 'multinode-246149'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-246149-m03 --driver=docker  --container-runtime=crio
E0731 23:10:20.329866 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-246149-m03 --driver=docker  --container-runtime=crio: (32.370049499s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-246149
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-246149: exit status 80 (314.219179ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-246149 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-246149-m03 already exists in multinode-246149-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-246149-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-246149-m03: (1.964204268s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.84s)
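The conflict comes from worker machine naming: in a profile named demo (placeholder), the workers are machines demo-m02, demo-m03, and so on, so a new profile whose name matches an existing machine is rejected, and node add fails if the next machine name is already taken by a standalone profile. A minimal sketch:

	# exits 14: "demo-m02" is already a machine name inside the demo profile
	minikube start -p demo-m02 --driver=docker --container-runtime=crio
	# a standalone demo-m03 profile starts fine, but then blocks node add...
	minikube start -p demo-m03 --driver=docker --container-runtime=crio
	# ...which exits 80 because the next machine name (demo-m03) is taken
	minikube node add -p demo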

                                                
                                    
TestPreload (142.83s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-564467 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0731 23:11:37.673132 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-564467 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m36.648732269s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-564467 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-564467 image pull gcr.io/k8s-minikube/busybox: (1.849682154s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-564467
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-564467: (5.748910466s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-564467 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-564467 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (35.951047371s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-564467 image list
helpers_test.go:175: Cleaning up "test-preload-564467" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-564467
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-564467: (2.389205397s)
--- PASS: TestPreload (142.83s)
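A minimal sketch of the preload round-trip, with a placeholder profile name:

	# create a cluster with the preload tarball disabled, on an older Kubernetes
	minikube start -p preload-demo --memory=2200 --wait=true --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
	# side-load an extra image, stop, then restart with preload enabled (default)
	minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p preload-demo
	minikube start -p preload-demo --memory=2200 --wait=true --driver=docker --container-runtime=crio
	# busybox should still be listed: the restart must not clobber cached images
	minikube -p preload-demo image list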

                                                
                                    
TestScheduledStopUnix (107.54s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-928576 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-928576 --memory=2048 --driver=docker  --container-runtime=crio: (30.804404312s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-928576 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-928576 -n scheduled-stop-928576
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-928576 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-928576 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-928576 -n scheduled-stop-928576
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-928576
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-928576 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-928576
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-928576: exit status 7 (68.015238ms)

                                                
                                                
-- stdout --
	scheduled-stop-928576
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-928576 -n scheduled-stop-928576
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-928576 -n scheduled-stop-928576: exit status 7 (60.981355ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-928576" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-928576
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-928576: (5.244894495s)
--- PASS: TestScheduledStopUnix (107.54s)
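A minimal sketch of the scheduled-stop controls, with a placeholder profile:

	# schedule a stop five minutes out and inspect the countdown
	minikube stop -p demo --schedule 5m
	minikube status --format={{.TimeToStop}} -p demo
	# re-scheduling replaces the pending stop; --cancel-scheduled clears it
	minikube stop -p demo --cancel-scheduled
	# a short schedule actually lands; status exits 7 once the host is stopped
	minikube stop -p demo --schedule 15s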

                                                
                                    
TestInsufficientStorage (13.46s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-048193 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-048193 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.984720334s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"dacb01f4-d660-48ca-bf68-018da9aa5473","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-048193] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0f070f11-d1b6-4c4a-a29b-369118e138a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19360"}}
	{"specversion":"1.0","id":"45b7e8f1-bb76-44e4-8767-54769a2c0af3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"34f9a9b6-8636-485f-91bd-bccc4e20b9a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19360-1579223/kubeconfig"}}
	{"specversion":"1.0","id":"6d352cf8-de9b-4602-929f-bc7ad1ea92ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1579223/.minikube"}}
	{"specversion":"1.0","id":"ee1eac2d-eed6-4d3e-854a-161931b1fa01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"edeb29a4-969b-4dcb-8007-a98971c2379f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"eea6cbde-fab8-4edb-a278-4acc1120e9d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"7c731ac3-02c5-4847-9196-1125ec27a963","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"802a425e-310a-49d2-a71d-d87eae6410a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"96f5aaa1-72a5-41ba-a30d-a789f5872e01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0ef114f4-16c2-4337-8d2c-a65220804c67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-048193\" primary control-plane node in \"insufficient-storage-048193\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"80fd1536-3ada-4613-8e51-edf4f1d82766","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1721902582-19326 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3506a5c6-aed7-40b2-8e32-d1795c8f0466","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"484ba0ad-de0b-4847-8b8f-6bc69a93b8a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-048193 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-048193 --output=json --layout=cluster: exit status 7 (290.121825ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-048193","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-048193","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 23:15:08.753405 1725079 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-048193" does not appear in /home/jenkins/minikube-integration/19360-1579223/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-048193 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-048193 --output=json --layout=cluster: exit status 7 (279.412685ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-048193","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-048193","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 23:15:09.036521 1725140 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-048193" does not appear in /home/jenkins/minikube-integration/19360-1579223/kubeconfig
	E0731 23:15:09.047787 1725140 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/insufficient-storage-048193/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-048193" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-048193
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-048193: (1.906951909s)
--- PASS: TestInsufficientStorage (13.46s)
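Judging by the MINIKUBE_TEST_STORAGE_CAPACITY/MINIKUBE_TEST_AVAILABLE_STORAGE values echoed in the JSON above, the suite appears to fake a full /var through environment variables rather than by really filling the disk. A sketch under that assumption:

	# pretend /var has 100 units of capacity with only 19 available; start should
	# abort with exit 26 (RSRC_DOCKER_STORAGE) unless --force is passed
	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
		minikube start -p storage-demo --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio
	# cluster-layout status reports StatusCode 507 (InsufficientStorage), exit 7
	minikube status -p storage-demo --output=json --layout=cluster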

                                                
                                    
TestRunningBinaryUpgrade (75.71s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.948469756 start -p running-upgrade-202699 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.948469756 start -p running-upgrade-202699 --memory=2200 --vm-driver=docker  --container-runtime=crio: (41.403251153s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-202699 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-202699 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.869052453s)
helpers_test.go:175: Cleaning up "running-upgrade-202699" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-202699
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-202699: (2.70775136s)
--- PASS: TestRunningBinaryUpgrade (75.71s)
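A minimal sketch of the in-place binary upgrade, where <old-minikube> is a placeholder for a previously released binary (the run above used a downloaded v1.26.0) and minikube is the binary under test:

	# create the cluster with the old binary (which still used --vm-driver)
	<old-minikube> start -p upgrade-run --memory=2200 --vm-driver=docker --container-runtime=crio
	# upgrade the running cluster by re-running start with the new binary
	minikube start -p upgrade-run --memory=2200 --driver=docker --container-runtime=crio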

                                                
                                    
TestKubernetesUpgrade (383.51s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-437065 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0731 23:18:23.377037 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-437065 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m6.4483618s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-437065
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-437065: (1.236110144s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-437065 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-437065 status --format={{.Host}}: exit status 7 (70.072082ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-437065 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0731 23:20:20.329834 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-437065 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m36.371681606s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-437065 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-437065 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-437065 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (104.629687ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-437065] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19360-1579223/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1579223/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-437065
	    minikube start -p kubernetes-upgrade-437065 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4370652 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-437065 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-437065 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-437065 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.581713205s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-437065" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-437065
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-437065: (2.574321697s)
--- PASS: TestKubernetesUpgrade (383.51s)
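A minimal sketch of the upgrade/downgrade checks, with a placeholder profile:

	# bring up an old Kubernetes, stop it, then restart onto a newer version
	minikube start -p k8s-upgrade --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
	minikube stop -p k8s-upgrade
	minikube start -p k8s-upgrade --memory=2200 --kubernetes-version=v1.31.0-beta.0 --driver=docker --container-runtime=crio
	# downgrading in place is refused: exit 106, K8S_DOWNGRADE_UNSUPPORTED
	minikube start -p k8s-upgrade --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio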

                                                
                                    
TestMissingContainerUpgrade (140.77s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3807592833 start -p missing-upgrade-165461 --memory=2200 --driver=docker  --container-runtime=crio
E0731 23:21:37.673635 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3807592833 start -p missing-upgrade-165461 --memory=2200 --driver=docker  --container-runtime=crio: (1m14.118829669s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-165461
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-165461: (10.427093489s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-165461
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-165461 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-165461 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (53.31452086s)
helpers_test.go:175: Cleaning up "missing-upgrade-165461" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-165461
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-165461: (2.088681545s)
--- PASS: TestMissingContainerUpgrade (140.77s)

                                                
                                    
TestPause/serial/Start (68.56s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-311179 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-311179 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m8.55623797s)
--- PASS: TestPause/serial/Start (68.56s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-815185 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-815185 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (112.845893ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-815185] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19360-1579223/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1579223/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
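A minimal sketch of the flag validation, with a placeholder profile:

	# exits 14: --no-kubernetes and --kubernetes-version are mutually exclusive
	minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=crio
	# if a version is pinned in the global config, clear it first
	minikube config unset kubernetes-version
	minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=crio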

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (43.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-815185 --driver=docker  --container-runtime=crio
E0731 23:15:20.329386 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-815185 --driver=docker  --container-runtime=crio: (43.233583302s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-815185 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.58s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (6.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-815185 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-815185 --no-kubernetes --driver=docker  --container-runtime=crio: (4.482517694s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-815185 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-815185 status -o json: exit status 2 (347.436786ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-815185","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-815185
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-815185: (2.048504734s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.88s)

                                                
                                    
TestNoKubernetes/serial/Start (9.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-815185 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-815185 --no-kubernetes --driver=docker  --container-runtime=crio: (9.759563454s)
--- PASS: TestNoKubernetes/serial/Start (9.76s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-815185 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-815185 "sudo systemctl is-active --quiet service kubelet": exit status 1 (281.602273ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
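
Note: "Process exited with status 3" is the pass condition, not a failure: systemctl is-active exits 0 only for an active unit and, on a typical systemd host, 3 for an inactive one, so the non-zero exit proves kubelet is not running on the --no-kubernetes node:

	$ systemctl is-active kubelet; echo $?
	# inactive
	# 3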

TestNoKubernetes/serial/ProfileList (0.98s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.98s)

TestNoKubernetes/serial/Stop (1.25s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-815185
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-815185: (1.246432811s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

TestNoKubernetes/serial/StartNoArgs (7.43s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-815185 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-815185 --driver=docker  --container-runtime=crio: (7.42513692s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.43s)

TestPause/serial/SecondStartNoReconfiguration (25.6s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-311179 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-311179 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.571536908s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (25.60s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-815185 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-815185 "sudo systemctl is-active --quiet service kubelet": exit status 1 (329.252114ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

TestNetworkPlugins/group/false (4.97s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-570273 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-570273 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (242.111576ms)

-- stdout --
	* [false-570273] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19360-1579223/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1579223/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0731 23:16:26.541341 1735621 out.go:291] Setting OutFile to fd 1 ...
	I0731 23:16:26.541459 1735621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:16:26.541469 1735621 out.go:304] Setting ErrFile to fd 2...
	I0731 23:16:26.541475 1735621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:16:26.541708 1735621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1579223/.minikube/bin
	I0731 23:16:26.542088 1735621 out.go:298] Setting JSON to false
	I0731 23:16:26.542991 1735621 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":25125,"bootTime":1722442662,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0731 23:16:26.543056 1735621 start.go:139] virtualization:  
	I0731 23:16:26.546431 1735621 out.go:177] * [false-570273] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0731 23:16:26.550024 1735621 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 23:16:26.550168 1735621 notify.go:220] Checking for updates...
	I0731 23:16:26.554942 1735621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 23:16:26.557132 1735621 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1579223/kubeconfig
	I0731 23:16:26.559482 1735621 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1579223/.minikube
	I0731 23:16:26.561995 1735621 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 23:16:26.564271 1735621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 23:16:26.567081 1735621 config.go:182] Loaded profile config "pause-311179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 23:16:26.567221 1735621 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 23:16:26.609970 1735621 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0731 23:16:26.610135 1735621 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 23:16:26.705176 1735621 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-31 23:16:26.691906683 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0731 23:16:26.705286 1735621 docker.go:307] overlay module found
	I0731 23:16:26.710540 1735621 out.go:177] * Using the docker driver based on user configuration
	I0731 23:16:26.713289 1735621 start.go:297] selected driver: docker
	I0731 23:16:26.713323 1735621 start.go:901] validating driver "docker" against <nil>
	I0731 23:16:26.713338 1735621 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 23:16:26.716099 1735621 out.go:177] 
	W0731 23:16:26.718689 1735621 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0731 23:16:26.725193 1735621 out.go:177] 

** /stderr **
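
Note: exit status 14 (MK_USAGE) is what this group asserts: minikube rejects --cni=false for the crio runtime because CRI-O ships no built-in pod network. A crio profile needs some CNI, for example (flags illustrative):

	$ minikube start -p demo --container-runtime=crio --cni=bridge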
net_test.go:88: 
----------------------- debugLogs start: false-570273 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-570273

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-570273

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-570273

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-570273

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-570273

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-570273

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-570273

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-570273

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-570273

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-570273

>>> host: /etc/nsswitch.conf:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: /etc/hosts:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: /etc/resolv.conf:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-570273

>>> host: crictl pods:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: crictl containers:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> k8s: describe netcat deployment:
error: context "false-570273" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-570273" does not exist

>>> k8s: netcat logs:
error: context "false-570273" does not exist

>>> k8s: describe coredns deployment:
error: context "false-570273" does not exist

>>> k8s: describe coredns pods:
error: context "false-570273" does not exist

>>> k8s: coredns logs:
error: context "false-570273" does not exist

>>> k8s: describe api server pod(s):
error: context "false-570273" does not exist

>>> k8s: api server logs:
error: context "false-570273" does not exist

>>> host: /etc/cni:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: ip a s:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: ip r s:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: iptables-save:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: iptables table nat:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> k8s: describe kube-proxy daemon set:
error: context "false-570273" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-570273" does not exist

>>> k8s: kube-proxy logs:
error: context "false-570273" does not exist

>>> host: kubelet daemon status:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: kubelet daemon config:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> k8s: kubelet logs:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 31 Jul 2024 23:16:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-311179
contexts:
- context:
    cluster: pause-311179
    extensions:
    - extension:
        last-update: Wed, 31 Jul 2024 23:16:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-311179
  name: pause-311179
current-context: pause-311179
kind: Config
preferences: {}
users:
- name: pause-311179
  user:
    client-certificate: /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/pause-311179/client.crt
    client-key: /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/pause-311179/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-570273

>>> host: docker daemon status:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: docker daemon config:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: /etc/docker/daemon.json:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: docker system info:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: cri-docker daemon status:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: cri-docker daemon config:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: cri-dockerd version:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: containerd daemon status:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: containerd daemon config:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: /etc/containerd/config.toml:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: containerd config dump:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: crio daemon status:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: crio daemon config:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: /etc/crio:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

>>> host: crio config:
* Profile "false-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-570273"

----------------------- debugLogs end: false-570273 [took: 4.520121575s] --------------------------------
helpers_test.go:175: Cleaning up "false-570273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-570273
--- PASS: TestNetworkPlugins/group/false (4.97s)

TestPause/serial/Pause (1.07s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-311179 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-311179 --alsologtostderr -v=5: (1.070390813s)
--- PASS: TestPause/serial/Pause (1.07s)

TestPause/serial/VerifyStatus (0.36s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-311179 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-311179 --output=json --layout=cluster: exit status 2 (358.05746ms)

-- stdout --
	{"Name":"pause-311179","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-311179","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)
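
Note: the --layout=cluster JSON reuses HTTP-style status codes, visible above: 200/OK for a healthy component, 405/Stopped for the paused kubelet, 418/Paused for the apiserver; the overall exit status 2 mirrors the paused state. Extracting the per-component view, assuming jq is available:

	$ out/minikube-linux-arm64 status -p pause-311179 --output=json --layout=cluster | jq '.Nodes[].Components'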

TestPause/serial/Unpause (0.79s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-311179 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.79s)

TestPause/serial/PauseAgain (1.13s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-311179 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-311179 --alsologtostderr -v=5: (1.132422684s)
--- PASS: TestPause/serial/PauseAgain (1.13s)

TestPause/serial/DeletePaused (3.27s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-311179 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-311179 --alsologtostderr -v=5: (3.271667508s)
--- PASS: TestPause/serial/DeletePaused (3.27s)

TestPause/serial/VerifyDeletedResources (0.45s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-311179
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-311179: exit status 1 (16.951479ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-311179: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.45s)
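
Note: the non-zero docker volume inspect is the assertion here: inspect prints [] and exits 1 when the named volume is gone, confirming that delete -p cleaned up the machine volume. A script-friendly form of the same check (illustrative):

	$ docker volume inspect pause-311179 >/dev/null 2>&1 || echo "volume removed"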

TestStoppedBinaryUpgrade/Setup (0.75s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.75s)

TestStoppedBinaryUpgrade/Upgrade (86.16s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3614116484 start -p stopped-upgrade-830428 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0731 23:24:40.721305 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3614116484 start -p stopped-upgrade-830428 --memory=2200 --vm-driver=docker  --container-runtime=crio: (46.267829648s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3614116484 -p stopped-upgrade-830428 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3614116484 -p stopped-upgrade-830428 stop: (3.175477751s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-830428 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-830428 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.713895905s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (86.16s)
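
Note: the upgrade path exercised above is legacy-binary start, legacy-binary stop, then current-binary start against the same profile, i.e. in outline:

	$ /tmp/minikube-v1.26.0.3614116484 start -p stopped-upgrade-830428 --memory=2200 --vm-driver=docker --container-runtime=crio
	$ /tmp/minikube-v1.26.0.3614116484 -p stopped-upgrade-830428 stop
	$ out/minikube-linux-arm64 start -p stopped-upgrade-830428 --memory=2200 --driver=docker --container-runtime=crio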

TestNetworkPlugins/group/auto/Start (71.92s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-570273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0731 23:25:20.329639 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-570273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m11.919852274s)
--- PASS: TestNetworkPlugins/group/auto/Start (71.92s)

TestNetworkPlugins/group/auto/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-570273 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)

TestNetworkPlugins/group/auto/NetCatPod (12.37s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-570273 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tvxml" [58eead5f-236f-4127-a3a2-69d3685dc21c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tvxml" [58eead5f-236f-4127-a3a2-69d3685dc21c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004392935s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.37s)
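
Note: the Pending -> Running progression the helpers poll for can be reproduced declaratively with kubectl wait (illustrative):

	$ kubectl --context auto-570273 wait --for=condition=Ready pod -l app=netcat --timeout=15m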

TestStoppedBinaryUpgrade/MinikubeLogs (1.61s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-830428
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-830428: (1.611132824s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.61s)

TestNetworkPlugins/group/kindnet/Start (71.29s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-570273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-570273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m11.293429154s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.29s)

TestNetworkPlugins/group/auto/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-570273 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.27s)

TestNetworkPlugins/group/auto/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-570273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-570273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)
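
Note: Localhost and HairPin run the same probe at different targets: nc -z opens the port without sending data and -w 5 bounds the wait at five seconds; the HairPin variant dials the pod's own service name ("netcat"), so success means hairpin NAT lets a pod reach itself through its service. Equivalent one-liner (illustrative):

	$ kubectl --context auto-570273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080 && echo hairpin-ok"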

TestNetworkPlugins/group/calico/Start (75.21s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-570273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-570273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m15.205023518s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.21s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-hvt8k" [a7b9915a-f911-4a22-9f05-d7959cb589b7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004945389s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.52s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-570273 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.52s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.38s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-570273 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-cftts" [d9873da3-9b35-457c-9639-5e71c1662656] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-cftts" [d9873da3-9b35-457c-9639-5e71c1662656] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.00379259s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.38s)

TestNetworkPlugins/group/kindnet/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-570273 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.27s)

TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-570273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-570273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-qhs8h" [096d3b70-1217-4fd9-baa9-617e883a253f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005841355s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-570273 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

TestNetworkPlugins/group/calico/NetCatPod (12.38s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-570273 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tcsrb" [97b111c7-829c-4e89-9dfc-930164605480] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tcsrb" [97b111c7-829c-4e89-9dfc-930164605480] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.005447407s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.38s)

TestNetworkPlugins/group/custom-flannel/Start (71.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-570273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-570273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m11.384185305s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.38s)
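
Note: besides the built-in names (auto, bridge, kindnet, calico, cilium, flannel), --cni accepts a path to a CNI manifest, which is what this group exercises with testdata/kube-flannel.yaml (path illustrative):

	$ minikube start -p demo --container-runtime=crio --cni=/path/to/custom-cni.yaml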

TestNetworkPlugins/group/calico/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-570273 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

TestNetworkPlugins/group/calico/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-570273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-570273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.25s)

TestNetworkPlugins/group/enable-default-cni/Start (86.59s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-570273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-570273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m26.586781245s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (86.59s)
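
Note: --enable-default-cni=true is the legacy spelling for the basic bridge CNI (newer minikube releases treat it as an alias for --cni=bridge); the group keeps it to cover that compatibility path:

	$ minikube start -p demo --container-runtime=crio --enable-default-cni=true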

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-570273 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-570273 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-mzmtb" [bea66d8b-cc71-40b3-8525-c981d00f7c7d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-mzmtb" [bea66d8b-cc71-40b3-8525-c981d00f7c7d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.003252002s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.27s)

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-570273 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-570273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-570273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

TestNetworkPlugins/group/flannel/Start (67.02s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-570273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-570273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m7.019847524s)
--- PASS: TestNetworkPlugins/group/flannel/Start (67.02s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-570273 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-570273 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-slcsm" [c4959168-7aa7-44c5-b3bd-06488fbfcd34] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-slcsm" [c4959168-7aa7-44c5-b3bd-06488fbfcd34] Running
E0731 23:30:20.329804 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003255067s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.31s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-570273 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-570273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-570273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/bridge/Start (87.09s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-570273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-570273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m27.093765325s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.09s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-7p6zh" [196a7edd-4f46-4755-882a-af1b8a8480a5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004456062s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-570273 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.40s)

TestNetworkPlugins/group/flannel/NetCatPod (12.32s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-570273 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-2mwxs" [97275820-dfe1-47f6-811c-a1a28f4c05d4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0731 23:31:04.231997 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/auto-570273/client.crt: no such file or directory
E0731 23:31:04.237189 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/auto-570273/client.crt: no such file or directory
E0731 23:31:04.247415 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/auto-570273/client.crt: no such file or directory
E0731 23:31:04.267686 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/auto-570273/client.crt: no such file or directory
E0731 23:31:04.308000 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/auto-570273/client.crt: no such file or directory
E0731 23:31:04.388226 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/auto-570273/client.crt: no such file or directory
E0731 23:31:04.549238 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/auto-570273/client.crt: no such file or directory
E0731 23:31:04.869848 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/auto-570273/client.crt: no such file or directory
E0731 23:31:05.510186 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/auto-570273/client.crt: no such file or directory
E0731 23:31:06.790590 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/auto-570273/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-2mwxs" [97275820-dfe1-47f6-811c-a1a28f4c05d4] Running
E0731 23:31:09.351114 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/auto-570273/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004366987s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.32s)

TestNetworkPlugins/group/flannel/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-570273 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.30s)

TestNetworkPlugins/group/flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-570273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0731 23:31:14.471706 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/auto-570273/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

TestNetworkPlugins/group/flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-570273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

TestStartStop/group/old-k8s-version/serial/FirstStart (160.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-130660 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0731 23:31:37.673698 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
E0731 23:31:45.192737 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/auto-570273/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-130660 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m40.543444942s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (160.54s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-570273 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-570273 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7dtxq" [6d281f0f-ffc9-4514-b7f9-3b81e5ebf0b5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-7dtxq" [6d281f0f-ffc9-4514-b7f9-3b81e5ebf0b5] Running
E0731 23:32:20.935566 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/kindnet-570273/client.crt: no such file or directory
E0731 23:32:20.940796 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/kindnet-570273/client.crt: no such file or directory
E0731 23:32:20.951013 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/kindnet-570273/client.crt: no such file or directory
E0731 23:32:20.971545 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/kindnet-570273/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004098576s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

TestNetworkPlugins/group/bridge/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-570273 exec deployment/netcat -- nslookup kubernetes.default
E0731 23:32:21.012036 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/kindnet-570273/client.crt: no such file or directory
E0731 23:32:21.092605 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/kindnet-570273/client.crt: no such file or directory
E0731 23:32:21.253139 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/kindnet-570273/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.25s)

TestNetworkPlugins/group/bridge/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-570273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.23s)

TestNetworkPlugins/group/bridge/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-570273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0731 23:32:21.574033 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/kindnet-570273/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.26s)
E0731 23:47:01.774322 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/client.crt: no such file or directory

TestStartStop/group/no-preload/serial/FirstStart (67.16s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-637585 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0731 23:32:56.183217 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/calico-570273/client.crt: no such file or directory
E0731 23:32:56.188550 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/calico-570273/client.crt: no such file or directory
E0731 23:32:56.198804 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/calico-570273/client.crt: no such file or directory
E0731 23:32:56.219136 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/calico-570273/client.crt: no such file or directory
E0731 23:32:56.259381 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/calico-570273/client.crt: no such file or directory
E0731 23:32:56.339635 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/calico-570273/client.crt: no such file or directory
E0731 23:32:56.499978 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/calico-570273/client.crt: no such file or directory
E0731 23:32:56.820504 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/calico-570273/client.crt: no such file or directory
E0731 23:32:57.460819 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/calico-570273/client.crt: no such file or directory
E0731 23:32:58.741171 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/calico-570273/client.crt: no such file or directory
E0731 23:33:01.301328 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/calico-570273/client.crt: no such file or directory
E0731 23:33:01.898256 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/kindnet-570273/client.crt: no such file or directory
E0731 23:33:06.422338 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/calico-570273/client.crt: no such file or directory
E0731 23:33:16.662689 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/calico-570273/client.crt: no such file or directory
E0731 23:33:37.143134 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/calico-570273/client.crt: no such file or directory
E0731 23:33:42.858623 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/kindnet-570273/client.crt: no such file or directory
E0731 23:33:48.074037 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/auto-570273/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-637585 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m7.162328351s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (67.16s)

TestStartStop/group/no-preload/serial/DeployApp (9.58s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-637585 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7f1039c3-2f4c-4062-9e2b-f7f32302835c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7f1039c3-2f4c-4062-9e2b-f7f32302835c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.026993907s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-637585 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.58s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-637585 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-637585 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.035947643s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-637585 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/no-preload/serial/Stop (11.98s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-637585 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-637585 --alsologtostderr -v=3: (11.984090409s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.98s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-637585 -n no-preload-637585
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-637585 -n no-preload-637585: exit status 7 (88.401125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-637585 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (301.08s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-637585 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0731 23:34:14.575759 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/custom-flannel-570273/client.crt: no such file or directory
E0731 23:34:14.581030 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/custom-flannel-570273/client.crt: no such file or directory
E0731 23:34:14.592149 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/custom-flannel-570273/client.crt: no such file or directory
E0731 23:34:14.612398 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/custom-flannel-570273/client.crt: no such file or directory
E0731 23:34:14.653216 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/custom-flannel-570273/client.crt: no such file or directory
E0731 23:34:14.733436 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/custom-flannel-570273/client.crt: no such file or directory
E0731 23:34:14.894177 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/custom-flannel-570273/client.crt: no such file or directory
E0731 23:34:15.215128 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/custom-flannel-570273/client.crt: no such file or directory
E0731 23:34:15.856064 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/custom-flannel-570273/client.crt: no such file or directory
E0731 23:34:17.136325 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/custom-flannel-570273/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-637585 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (5m0.723009733s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-637585 -n no-preload-637585
E0731 23:39:14.575522 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/custom-flannel-570273/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (301.08s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-130660 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [204990b3-07b7-4675-ac9a-1bdb46d7a09f] Pending
E0731 23:34:18.104051 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/calico-570273/client.crt: no such file or directory
helpers_test.go:344: "busybox" [204990b3-07b7-4675-ac9a-1bdb46d7a09f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0731 23:34:19.697293 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/custom-flannel-570273/client.crt: no such file or directory
helpers_test.go:344: "busybox" [204990b3-07b7-4675-ac9a-1bdb46d7a09f] Running
E0731 23:34:24.817587 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/custom-flannel-570273/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003260001s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-130660 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.60s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-130660 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-130660 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.583553668s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-130660 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.94s)

TestStartStop/group/old-k8s-version/serial/Stop (13.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-130660 --alsologtostderr -v=3
E0731 23:34:35.058725 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/custom-flannel-570273/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-130660 --alsologtostderr -v=3: (13.030849722s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.03s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-130660 -n old-k8s-version-130660
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-130660 -n old-k8s-version-130660: exit status 7 (116.670789ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-130660 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.30s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-w4jlf" [454caa8d-9acb-4fda-bc0c-05937598c621] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003942034s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-w4jlf" [454caa8d-9acb-4fda-bc0c-05937598c621] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004781245s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-637585 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-637585 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/no-preload/serial/Pause (3.15s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-637585 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-637585 -n no-preload-637585
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-637585 -n no-preload-637585: exit status 2 (305.275488ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-637585 -n no-preload-637585
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-637585 -n no-preload-637585: exit status 2 (299.413307ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-637585 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-637585 -n no-preload-637585
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-637585 -n no-preload-637585
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.15s)

TestStartStop/group/embed-certs/serial/FirstStart (61.85s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-442076 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3
E0731 23:39:42.261468 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/custom-flannel-570273/client.crt: no such file or directory
E0731 23:39:54.832752 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/bridge-570273/client.crt: no such file or directory
E0731 23:40:08.592270 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/enable-default-cni-570273/client.crt: no such file or directory
E0731 23:40:20.329863 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-442076 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3: (1m1.853565321s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (61.85s)

TestStartStop/group/embed-certs/serial/DeployApp (8.4s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-442076 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5379a268-635f-4236-97fd-044409375a19] Pending
E0731 23:40:36.276253 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/enable-default-cni-570273/client.crt: no such file or directory
helpers_test.go:344: "busybox" [5379a268-635f-4236-97fd-044409375a19] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5379a268-635f-4236-97fd-044409375a19] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004112044s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-442076 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.40s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-442076 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-442076 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.078704067s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-442076 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/embed-certs/serial/Stop (12.03s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-442076 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-442076 --alsologtostderr -v=3: (12.031340416s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.03s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-d7gr2" [5cbcfe3f-1a71-42e5-9613-939f6bd5ffde] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004336595s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-442076 -n embed-certs-442076
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-442076 -n embed-certs-442076: exit status 7 (69.198974ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-442076 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (277.63s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-442076 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-442076 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3: (4m37.280941929s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-442076 -n embed-certs-442076
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (277.63s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-d7gr2" [5cbcfe3f-1a71-42e5-9613-939f6bd5ffde] Running
E0731 23:41:04.232016 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/auto-570273/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002957664s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-130660 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.14s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-130660 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/old-k8s-version/serial/Pause (4.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-130660 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-130660 --alsologtostderr -v=1: (1.18154236s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-130660 -n old-k8s-version-130660
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-130660 -n old-k8s-version-130660: exit status 2 (503.521605ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-130660 -n old-k8s-version-130660
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-130660 -n old-k8s-version-130660: exit status 2 (465.297984ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-130660 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-130660 --alsologtostderr -v=1: (1.226063297s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-130660 -n old-k8s-version-130660
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-130660 -n old-k8s-version-130660
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.40s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-879501 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3
E0731 23:41:20.722338 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
E0731 23:41:22.978036 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/flannel-570273/client.crt: no such file or directory
E0731 23:41:37.672970 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
E0731 23:42:10.989017 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/bridge-570273/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-879501 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3: (1m2.421380813s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.42s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-879501 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a83ea8d3-044d-42e7-8b88-843bdf9bc468] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0731 23:42:20.934746 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/kindnet-570273/client.crt: no such file or directory
helpers_test.go:344: "busybox" [a83ea8d3-044d-42e7-8b88-843bdf9bc468] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.008151743s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-879501 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.36s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-879501 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-879501 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-879501 --alsologtostderr -v=3
E0731 23:42:38.673835 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/bridge-570273/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-879501 --alsologtostderr -v=3: (11.936691565s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.94s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-879501 -n default-k8s-diff-port-879501
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-879501 -n default-k8s-diff-port-879501: exit status 7 (67.546105ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-879501 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (265.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-879501 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3
E0731 23:42:56.186752 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/calico-570273/client.crt: no such file or directory
E0731 23:43:51.075011 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/no-preload-637585/client.crt: no such file or directory
E0731 23:43:51.080320 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/no-preload-637585/client.crt: no such file or directory
E0731 23:43:51.090721 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/no-preload-637585/client.crt: no such file or directory
E0731 23:43:51.110999 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/no-preload-637585/client.crt: no such file or directory
E0731 23:43:51.151269 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/no-preload-637585/client.crt: no such file or directory
E0731 23:43:51.231533 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/no-preload-637585/client.crt: no such file or directory
E0731 23:43:51.392566 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/no-preload-637585/client.crt: no such file or directory
E0731 23:43:51.713196 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/no-preload-637585/client.crt: no such file or directory
E0731 23:43:52.353823 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/no-preload-637585/client.crt: no such file or directory
E0731 23:43:53.634822 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/no-preload-637585/client.crt: no such file or directory
E0731 23:43:56.195005 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/no-preload-637585/client.crt: no such file or directory
E0731 23:44:01.315882 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/no-preload-637585/client.crt: no such file or directory
E0731 23:44:11.556169 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/no-preload-637585/client.crt: no such file or directory
E0731 23:44:14.575028 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/custom-flannel-570273/client.crt: no such file or directory
E0731 23:44:17.931868 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/client.crt: no such file or directory
E0731 23:44:17.937199 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/client.crt: no such file or directory
E0731 23:44:17.947506 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/client.crt: no such file or directory
E0731 23:44:17.967771 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/client.crt: no such file or directory
E0731 23:44:18.008169 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/client.crt: no such file or directory
E0731 23:44:18.088485 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/client.crt: no such file or directory
E0731 23:44:18.248956 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/client.crt: no such file or directory
E0731 23:44:18.569558 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/client.crt: no such file or directory
E0731 23:44:19.210340 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/client.crt: no such file or directory
E0731 23:44:20.490723 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/client.crt: no such file or directory
E0731 23:44:23.051400 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/client.crt: no such file or directory
E0731 23:44:28.172423 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/client.crt: no such file or directory
E0731 23:44:32.036470 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/no-preload-637585/client.crt: no such file or directory
E0731 23:44:38.413087 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/client.crt: no such file or directory
E0731 23:44:58.893822 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/client.crt: no such file or directory
E0731 23:45:08.592614 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/enable-default-cni-570273/client.crt: no such file or directory
E0731 23:45:12.996716 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/no-preload-637585/client.crt: no such file or directory
E0731 23:45:20.329036 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/addons-849486/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-879501 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3: (4m25.50287502s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-879501 -n default-k8s-diff-port-879501
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (265.82s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-7c8j7" [f3f9fa55-5724-40a4-b66d-7f768e604188] Running
E0731 23:45:39.854064 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/old-k8s-version-130660/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003584352s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-7c8j7" [f3f9fa55-5724-40a4-b66d-7f768e604188] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004196062s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-442076 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-442076 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-442076 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-442076 -n embed-certs-442076
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-442076 -n embed-certs-442076: exit status 2 (321.327392ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-442076 -n embed-certs-442076
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-442076 -n embed-certs-442076: exit status 2 (316.646427ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-442076 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-442076 -n embed-certs-442076
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-442076 -n embed-certs-442076
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.13s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (36.65s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-509631 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0731 23:45:55.295320 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/flannel-570273/client.crt: no such file or directory
E0731 23:46:04.232368 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/auto-570273/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-509631 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (36.653412843s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.65s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-509631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-509631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.466805801s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.47s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-509631 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-509631 --alsologtostderr -v=3: (1.282944831s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-509631 -n newest-cni-509631
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-509631 -n newest-cni-509631: exit status 7 (72.647674ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-509631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (16.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-509631 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0731 23:46:34.916968 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/no-preload-637585/client.crt: no such file or directory
E0731 23:46:37.673693 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-509631 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (15.888748799s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-509631 -n newest-cni-509631
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.40s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-509631 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-509631 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-509631 -n newest-cni-509631
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-509631 -n newest-cni-509631: exit status 2 (331.504326ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-509631 -n newest-cni-509631
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-509631 -n newest-cni-509631: exit status 2 (365.354101ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-509631 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-509631 -n newest-cni-509631
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-509631 -n newest-cni-509631
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-fhfw8" [da1fdf7f-1fb8-4cdf-a3a2-f92f87dbd072] Running
E0731 23:47:10.989825 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/bridge-570273/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003826342s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-fhfw8" [da1fdf7f-1fb8-4cdf-a3a2-f92f87dbd072] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003874919s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-879501 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-879501 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-879501 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-879501 -n default-k8s-diff-port-879501
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-879501 -n default-k8s-diff-port-879501: exit status 2 (318.185504ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-879501 -n default-k8s-diff-port-879501
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-879501 -n default-k8s-diff-port-879501: exit status 2 (302.419769ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-879501 --alsologtostderr -v=1
E0731 23:47:20.935347 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/kindnet-570273/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-879501 -n default-k8s-diff-port-879501
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-879501 -n default-k8s-diff-port-879501
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.88s)

                                                
                                    

Test skip (33/336)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.54s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-364967 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-364967" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-364967
--- SKIP: TestDownloadOnlyKic (0.54s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-570273 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-570273

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-570273

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-570273

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-570273

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-570273

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-570273

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-570273

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-570273

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-570273

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-570273

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-570273

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-570273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-570273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-570273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-570273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-570273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-570273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-570273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-570273" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-570273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-570273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-570273" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.crt
extensions:
- extension:
last-update: Wed, 31 Jul 2024 23:16:03 UTC
provider: minikube.sigs.k8s.io
version: v1.33.1
name: cluster_info
server: https://192.168.76.2:8443
name: pause-311179
contexts:
- context:
cluster: pause-311179
extensions:
- extension:
last-update: Wed, 31 Jul 2024 23:16:03 UTC
provider: minikube.sigs.k8s.io
version: v1.33.1
name: context_info
namespace: default
user: pause-311179
name: pause-311179
current-context: pause-311179
kind: Config
preferences: {}
users:
- name: pause-311179
user:
client-certificate: /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/pause-311179/client.crt
client-key: /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/pause-311179/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-570273

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-570273"

                                                
                                                
----------------------- debugLogs end: kubenet-570273 [took: 3.136111257s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-570273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-570273
--- SKIP: TestNetworkPlugins/group/kubenet (3.28s)
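Every ">>> host: ..." and ">>> k8s: ..." probe above reports "Profile ... not found" or "context was not found" for the same reason: the debugLogs collector runs its diagnostics against a profile that was never created. These lines are expected for a skipped test, not additional failures. A minimal sketch of the shape of each probe, with the command form assumed from the output rather than taken from the harness source:

    minikube -p kubenet-570273 ssh -- sudo cat /var/lib/kubelet/config.yaml   (host probes)
    kubectl --context kubenet-570273 describe pods -n kube-system             (k8s probes)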

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (6.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-570273 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-570273

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-570273

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-570273

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-570273

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-570273

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-570273

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-570273

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-570273

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-570273

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-570273

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-570273

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-570273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-570273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-570273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-570273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-570273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-570273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-570273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-570273" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-570273

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-570273

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-570273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-570273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-570273

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-570273

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-570273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-570273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-570273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-570273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-570273" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19360-1579223/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 31 Jul 2024 23:16:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-311179
contexts:
- context:
    cluster: pause-311179
    extensions:
    - extension:
        last-update: Wed, 31 Jul 2024 23:16:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-311179
  name: pause-311179
current-context: pause-311179
kind: Config
preferences: {}
users:
- name: pause-311179
  user:
    client-certificate: /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/pause-311179/client.crt
    client-key: /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/pause-311179/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-570273

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-570273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570273"

                                                
                                                
----------------------- debugLogs end: cilium-570273 [took: 6.014934686s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-570273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-570273
E0731 23:16:37.673996 1584615 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt: no such file or directory
--- SKIP: TestNetworkPlugins/group/cilium (6.26s)
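The stray cert_rotation error during cleanup (E0731 23:16:37 above) appears to come from client-go's certificate-reload watcher still tracking the client.crt of the previously deleted functional-818778 profile; the file is gone from disk, so the watcher logs the open failure, but the delete itself completes. An illustrative check, not part of the captured run:

    ls -l /home/jenkins/minikube-integration/19360-1579223/.minikube/profiles/functional-818778/client.crt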

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-806531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-806531
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
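This group is gated on the virtualbox driver (start_stop_delete_test.go:103), so it is skipped on this Docker-driver arm64 runner. For reference, a hedged sketch of the start invocation the group would exercise on a virtualbox-capable host; the flag set is assumed from the test name, not captured from this run:

    out/minikube-linux-arm64 start -p disable-driver-mounts-806531 --driver=virtualbox --disable-driver-mounts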

                                                
                                    