Test Report: Docker_Linux_crio_arm64 19265

4b25178fc7513411450a4d543cff32ee34a2d14b:2024-07-17:35370

Test failures (2/336)

| Order | Failed test                       | Duration (s) |
|-------|-----------------------------------|--------------|
| 39    | TestAddons/parallel/Ingress       | 151.68       |
| 41    | TestAddons/parallel/MetricsServer | 360.48       |
TestAddons/parallel/Ingress (151.68s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-579136 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-579136 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-579136 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8a21f915-6626-475d-926c-97c91ad987ff] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8a21f915-6626-475d-926c-97c91ad987ff] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003063932s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-579136 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-579136 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.850647904s)

** stderr ** 
	ssh: Process exited with status 28
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
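For context: `minikube ssh` propagates the remote command's exit status, and status 28 here is most likely curl's own exit code 28 (operation timed out), meaning the ingress never returned an HTTP response within the deadline. A minimal sketch of the same failure mode, assuming nothing beyond the Python standard library (the `10.255.255.1` target is an arbitrary non-routable address used only for illustration; the real test hit `http://127.0.0.1/` inside the minikube node):

```python
import urllib.request

def probe(url: str, host: str, timeout: float = 2.0) -> str:
    """Return 'ok' on any HTTP response, else the error class name.

    Mirrors the failing check: GET / with an explicit Host header,
    bounded by a timeout (the role curl's exit code 28 signals).
    """
    req = urllib.request.Request(url, headers={"Host": host})
    try:
        with urllib.request.urlopen(req, timeout=timeout):
            return "ok"
    except OSError as exc:  # URLError and TimeoutError are both OSError subclasses
        return type(exc).__name__

print(probe("http://10.255.255.1/", "nginx.example.com"))
```

A timeout or unreachable host surfaces as an error class name rather than "ok", which is the analogue of the non-zero curl exit seen above.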
addons_test.go:288: (dbg) Run:  kubectl --context addons-579136 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-579136 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-579136 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-579136 addons disable ingress-dns --alsologtostderr -v=1: (1.199505102s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-579136 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-579136 addons disable ingress --alsologtostderr -v=1: (7.727557488s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-579136
helpers_test.go:235: (dbg) docker inspect addons-579136:

-- stdout --
	[
	    {
	        "Id": "f1485c08a8695db0847071d96b873c7b61e1965eedb95fe1665b7c2d3eb027bb",
	        "Created": "2024-07-17T00:06:23.189615279Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 9108,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-17T00:06:23.373233785Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5d45b1ffc93797449a214b942992f529b6d45b715f4913615d5e219890c79f90",
	        "ResolvConfPath": "/var/lib/docker/containers/f1485c08a8695db0847071d96b873c7b61e1965eedb95fe1665b7c2d3eb027bb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f1485c08a8695db0847071d96b873c7b61e1965eedb95fe1665b7c2d3eb027bb/hostname",
	        "HostsPath": "/var/lib/docker/containers/f1485c08a8695db0847071d96b873c7b61e1965eedb95fe1665b7c2d3eb027bb/hosts",
	        "LogPath": "/var/lib/docker/containers/f1485c08a8695db0847071d96b873c7b61e1965eedb95fe1665b7c2d3eb027bb/f1485c08a8695db0847071d96b873c7b61e1965eedb95fe1665b7c2d3eb027bb-json.log",
	        "Name": "/addons-579136",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-579136:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-579136",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0a17a3986b00272011e0c74417d8b3b617230b0669e07894e467b83be32ef441-init/diff:/var/lib/docker/overlay2/5c52293bfcd82276da29849f51cb4ed3256d8b703926adbd5037ecfe280e85a6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0a17a3986b00272011e0c74417d8b3b617230b0669e07894e467b83be32ef441/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0a17a3986b00272011e0c74417d8b3b617230b0669e07894e467b83be32ef441/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0a17a3986b00272011e0c74417d8b3b617230b0669e07894e467b83be32ef441/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-579136",
	                "Source": "/var/lib/docker/volumes/addons-579136/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-579136",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-579136",
	                "name.minikube.sigs.k8s.io": "addons-579136",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eb2eaf5e9e030d11de0eae75a2dc66af4c77a867b0feba85b943efc0aaa088b4",
	            "SandboxKey": "/var/run/docker/netns/eb2eaf5e9e03",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-579136": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c31bae7560a521b93c9ce69126042ea843fa0cebab7232bb7f7d9703da7242e2",
	                    "EndpointID": "0113948c6d5f2dfed84340b0671028f59fb49f2adec5be142955ca0ec216f61a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-579136",
	                        "f1485c08a869"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
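When reading a post-mortem dump like the inspect output above, the host-port bindings under `NetworkSettings.Ports` are often the first thing worth checking. A small helper for pulling them out of `docker inspect` JSON (a hypothetical convenience, not part of the test suite; the sample below is trimmed from the dump above):

```python
import json

# Trimmed sample shaped like the `docker inspect addons-579136` output above.
inspect_output = """
[
  {
    "Name": "/addons-579136",
    "NetworkSettings": {
      "Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "32768"}],
        "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32771"}]
      }
    }
  }
]
"""

def host_ports(inspect_json: str) -> dict:
    """Map each bound container port (e.g. '8443/tcp') to 'HostIp:HostPort'."""
    container = json.loads(inspect_json)[0]  # docker inspect returns a list
    ports = container["NetworkSettings"]["Ports"] or {}
    return {
        cport: f"{b['HostIp']}:{b['HostPort']}"
        for cport, bindings in ports.items()
        if bindings  # unbound ports appear as null
        for b in bindings
    }

print(host_ports(inspect_output))
```

The same information is available directly via `docker inspect --format '{{json .NetworkSettings.Ports}}' <name>`; the helper is just handy when the full dump has already been captured in a log.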
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-579136 -n addons-579136
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-579136 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-579136 logs -n 25: (1.433036673s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-740132                                                                     | download-only-740132   | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC | 17 Jul 24 00:05 UTC |
	| delete  | -p download-only-439568                                                                     | download-only-439568   | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC | 17 Jul 24 00:05 UTC |
	| delete  | -p download-only-915936                                                                     | download-only-915936   | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC | 17 Jul 24 00:05 UTC |
	| delete  | -p download-only-740132                                                                     | download-only-740132   | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC | 17 Jul 24 00:05 UTC |
	| start   | --download-only -p                                                                          | download-docker-852192 | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC |                     |
	|         | download-docker-852192                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-852192                                                                   | download-docker-852192 | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC | 17 Jul 24 00:05 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-257090   | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC |                     |
	|         | binary-mirror-257090                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:46489                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-257090                                                                     | binary-mirror-257090   | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC | 17 Jul 24 00:05 UTC |
	| addons  | enable dashboard -p                                                                         | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC |                     |
	|         | addons-579136                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC |                     |
	|         | addons-579136                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-579136 --wait=true                                                                | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC | 17 Jul 24 00:08 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:08 UTC |
	|         | -p addons-579136                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-579136 ip                                                                            | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:09 UTC | 17 Jul 24 00:09 UTC |
	| addons  | addons-579136 addons disable                                                                | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:09 UTC | 17 Jul 24 00:09 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:09 UTC | 17 Jul 24 00:09 UTC |
	|         | -p addons-579136                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:09 UTC | 17 Jul 24 00:09 UTC |
	|         | addons-579136                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-579136 ssh cat                                                                       | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:09 UTC | 17 Jul 24 00:09 UTC |
	|         | /opt/local-path-provisioner/pvc-79ea8134-8c2c-4ed1-b98e-1d5f361ebf2b_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-579136 addons disable                                                                | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:09 UTC | 17 Jul 24 00:10 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:10 UTC | 17 Jul 24 00:10 UTC |
	|         | addons-579136                                                                               |                        |         |         |                     |                     |
	| addons  | addons-579136 addons                                                                        | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:10 UTC | 17 Jul 24 00:10 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-579136 addons                                                                        | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:10 UTC | 17 Jul 24 00:10 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-579136 ssh curl -s                                                                   | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:10 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-579136 ip                                                                            | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:12 UTC | 17 Jul 24 00:12 UTC |
	| addons  | addons-579136 addons disable                                                                | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:12 UTC | 17 Jul 24 00:12 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-579136 addons disable                                                                | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:12 UTC | 17 Jul 24 00:12 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:05:58
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:05:58.798832    8602 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:05:58.798944    8602 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:05:58.798954    8602 out.go:304] Setting ErrFile to fd 2...
	I0717 00:05:58.798959    8602 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:05:58.799289    8602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-2269/.minikube/bin
	I0717 00:05:58.800069    8602 out.go:298] Setting JSON to false
	I0717 00:05:58.800875    8602 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2911,"bootTime":1721171848,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0717 00:05:58.800974    8602 start.go:139] virtualization:  
	I0717 00:05:58.804728    8602 out.go:177] * [addons-579136] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0717 00:05:58.806836    8602 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 00:05:58.806902    8602 notify.go:220] Checking for updates...
	I0717 00:05:58.810855    8602 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:05:58.812546    8602 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-2269/kubeconfig
	I0717 00:05:58.814432    8602 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-2269/.minikube
	I0717 00:05:58.816316    8602 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 00:05:58.818209    8602 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:05:58.820515    8602 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:05:58.846437    8602 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0717 00:05:58.846541    8602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:05:58.909059    8602 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-17 00:05:58.900255911 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 00:05:58.909170    8602 docker.go:307] overlay module found
	I0717 00:05:58.911333    8602 out.go:177] * Using the docker driver based on user configuration
	I0717 00:05:58.913158    8602 start.go:297] selected driver: docker
	I0717 00:05:58.913194    8602 start.go:901] validating driver "docker" against <nil>
	I0717 00:05:58.913212    8602 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:05:58.913819    8602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:05:58.973387    8602 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-17 00:05:58.964065604 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 00:05:58.973556    8602 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:05:58.973811    8602 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:05:58.975930    8602 out.go:177] * Using Docker driver with root privileges
	I0717 00:05:58.977705    8602 cni.go:84] Creating CNI manager for ""
	I0717 00:05:58.977724    8602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 00:05:58.977735    8602 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 00:05:58.977826    8602 start.go:340] cluster config:
	{Name:addons-579136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-579136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:05:58.979749    8602 out.go:177] * Starting "addons-579136" primary control-plane node in "addons-579136" cluster
	I0717 00:05:58.981988    8602 cache.go:121] Beginning downloading kic base image for docker with crio
	I0717 00:05:58.983817    8602 out.go:177] * Pulling base image v0.0.44-1721064868-19249 ...
	I0717 00:05:58.985675    8602 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:05:58.985728    8602 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19265-2269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4
	I0717 00:05:58.985752    8602 cache.go:56] Caching tarball of preloaded images
	I0717 00:05:58.985761    8602 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local docker daemon
	I0717 00:05:58.985830    8602 preload.go:172] Found /home/jenkins/minikube-integration/19265-2269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0717 00:05:58.985840    8602 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:05:58.986180    8602 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/config.json ...
	I0717 00:05:58.986207    8602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/config.json: {Name:mk77004d90b416030051575e931c0c894a38ccf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:59.003973    8602 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c to local cache
	I0717 00:05:59.004108    8602 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local cache directory
	I0717 00:05:59.004136    8602 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local cache directory, skipping pull
	I0717 00:05:59.004150    8602 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c exists in cache, skipping pull
	I0717 00:05:59.004158    8602 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c as a tarball
	I0717 00:05:59.004164    8602 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c from local cache
	I0717 00:06:15.839576    8602 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c from cached tarball
	I0717 00:06:15.839614    8602 cache.go:194] Successfully downloaded all kic artifacts
	I0717 00:06:15.839651    8602 start.go:360] acquireMachinesLock for addons-579136: {Name:mkdf103692deb65e932cffd7ff6c86e49eeb0190 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:06:15.839761    8602 start.go:364] duration metric: took 86.78µs to acquireMachinesLock for "addons-579136"
	I0717 00:06:15.839804    8602 start.go:93] Provisioning new machine with config: &{Name:addons-579136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-579136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:06:15.839891    8602 start.go:125] createHost starting for "" (driver="docker")
	I0717 00:06:15.842438    8602 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0717 00:06:15.842714    8602 start.go:159] libmachine.API.Create for "addons-579136" (driver="docker")
	I0717 00:06:15.842748    8602 client.go:168] LocalClient.Create starting
	I0717 00:06:15.842881    8602 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19265-2269/.minikube/certs/ca.pem
	I0717 00:06:16.275846    8602 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19265-2269/.minikube/certs/cert.pem
	I0717 00:06:16.736873    8602 cli_runner.go:164] Run: docker network inspect addons-579136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 00:06:16.750297    8602 cli_runner.go:211] docker network inspect addons-579136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 00:06:16.750387    8602 network_create.go:284] running [docker network inspect addons-579136] to gather additional debugging logs...
	I0717 00:06:16.750408    8602 cli_runner.go:164] Run: docker network inspect addons-579136
	W0717 00:06:16.764232    8602 cli_runner.go:211] docker network inspect addons-579136 returned with exit code 1
	I0717 00:06:16.764258    8602 network_create.go:287] error running [docker network inspect addons-579136]: docker network inspect addons-579136: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-579136 not found
	I0717 00:06:16.764270    8602 network_create.go:289] output of [docker network inspect addons-579136]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-579136 not found
	
	** /stderr **
	I0717 00:06:16.764368    8602 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 00:06:16.780120    8602 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c77ac0}
	I0717 00:06:16.780166    8602 network_create.go:124] attempt to create docker network addons-579136 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0717 00:06:16.780222    8602 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-579136 addons-579136
	I0717 00:06:16.852730    8602 network_create.go:108] docker network addons-579136 192.168.49.0/24 created
	I0717 00:06:16.852778    8602 kic.go:121] calculated static IP "192.168.49.2" for the "addons-579136" container
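The free-subnet pick above fixes the addressing for the rest of the run: minikube uses the first host in the chosen /24 as the bridge gateway and the second as the node's static IP. A minimal shell sketch of that mapping, matching the values in the log (illustrative only, not code from minikube itself):

```shell
# Derive the gateway (.1) and static node IP (.2) from a /24 subnet string,
# as minikube does for its docker network. Sketch for illustration.
subnet="192.168.49.0/24"
base="${subnet%.*}"   # drop the final octet and mask -> 192.168.49
gateway="${base}.1"   # bridge gateway
node_ip="${base}.2"   # first client address, assigned to the node container
echo "gateway=${gateway} node_ip=${node_ip}"
# -> gateway=192.168.49.1 node_ip=192.168.49.2
```

These two addresses reappear below in the `docker network create --gateway=192.168.49.1` invocation and the container's `--ip 192.168.49.2` flag.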
	I0717 00:06:16.852853    8602 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 00:06:16.868920    8602 cli_runner.go:164] Run: docker volume create addons-579136 --label name.minikube.sigs.k8s.io=addons-579136 --label created_by.minikube.sigs.k8s.io=true
	I0717 00:06:16.886934    8602 oci.go:103] Successfully created a docker volume addons-579136
	I0717 00:06:16.887034    8602 cli_runner.go:164] Run: docker run --rm --name addons-579136-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-579136 --entrypoint /usr/bin/test -v addons-579136:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c -d /var/lib
	I0717 00:06:18.958653    8602 cli_runner.go:217] Completed: docker run --rm --name addons-579136-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-579136 --entrypoint /usr/bin/test -v addons-579136:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c -d /var/lib: (2.071562665s)
	I0717 00:06:18.958684    8602 oci.go:107] Successfully prepared a docker volume addons-579136
	I0717 00:06:18.958713    8602 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:06:18.958733    8602 kic.go:194] Starting extracting preloaded images to volume ...
	I0717 00:06:18.958874    8602 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19265-2269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-579136:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 00:06:23.123140    8602 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19265-2269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-579136:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c -I lz4 -xf /preloaded.tar -C /extractDir: (4.16421926s)
	I0717 00:06:23.123175    8602 kic.go:203] duration metric: took 4.164437589s to extract preloaded images to volume ...
	W0717 00:06:23.123316    8602 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 00:06:23.123429    8602 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 00:06:23.174777    8602 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-579136 --name addons-579136 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-579136 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-579136 --network addons-579136 --ip 192.168.49.2 --volume addons-579136:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c
	I0717 00:06:23.535643    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Running}}
	I0717 00:06:23.556735    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:06:23.579536    8602 cli_runner.go:164] Run: docker exec addons-579136 stat /var/lib/dpkg/alternatives/iptables
	I0717 00:06:23.656394    8602 oci.go:144] the created container "addons-579136" has a running status.
	I0717 00:06:23.656427    8602 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa...
	I0717 00:06:24.287958    8602 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 00:06:24.318159    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:06:24.345553    8602 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 00:06:24.345578    8602 kic_runner.go:114] Args: [docker exec --privileged addons-579136 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 00:06:24.412616    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:06:24.435254    8602 machine.go:94] provisionDockerMachine start ...
	I0717 00:06:24.435341    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:06:24.455456    8602 main.go:141] libmachine: Using SSH client type: native
	I0717 00:06:24.455850    8602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0717 00:06:24.455863    8602 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 00:06:24.590617    8602 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-579136
	
	I0717 00:06:24.590642    8602 ubuntu.go:169] provisioning hostname "addons-579136"
	I0717 00:06:24.590707    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:06:24.607310    8602 main.go:141] libmachine: Using SSH client type: native
	I0717 00:06:24.607598    8602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0717 00:06:24.607613    8602 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-579136 && echo "addons-579136" | sudo tee /etc/hostname
	I0717 00:06:24.750066    8602 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-579136
	
	I0717 00:06:24.750141    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:06:24.767268    8602 main.go:141] libmachine: Using SSH client type: native
	I0717 00:06:24.767509    8602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0717 00:06:24.767525    8602 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-579136' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-579136/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-579136' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:06:24.894841    8602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
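The SSH script above rewrites any existing `127.0.1.1` entry so the container resolves its own hostname. A self-contained sketch of the same substitution against sample input piped through `sed` (the sample hosts content is invented; nothing on the real system is touched):

```shell
# Reproduce the 127.0.1.1 rewrite from the provisioning script, reading a
# sample hosts file from stdin instead of editing /etc/hosts in place.
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' \
  | sed 's/^127.0.1.1[[:space:]].*/127.0.1.1 addons-579136/'
# -> 127.0.0.1 localhost
# -> 127.0.1.1 addons-579136
```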
	I0717 00:06:24.894868    8602 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19265-2269/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-2269/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-2269/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-2269/.minikube}
	I0717 00:06:24.894892    8602 ubuntu.go:177] setting up certificates
	I0717 00:06:24.894902    8602 provision.go:84] configureAuth start
	I0717 00:06:24.894962    8602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-579136
	I0717 00:06:24.911118    8602 provision.go:143] copyHostCerts
	I0717 00:06:24.911209    8602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-2269/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-2269/.minikube/ca.pem (1078 bytes)
	I0717 00:06:24.911332    8602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-2269/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-2269/.minikube/cert.pem (1123 bytes)
	I0717 00:06:24.911391    8602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-2269/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-2269/.minikube/key.pem (1679 bytes)
	I0717 00:06:24.911469    8602 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-2269/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-2269/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-2269/.minikube/certs/ca-key.pem org=jenkins.addons-579136 san=[127.0.0.1 192.168.49.2 addons-579136 localhost minikube]
	I0717 00:06:25.459642    8602 provision.go:177] copyRemoteCerts
	I0717 00:06:25.459714    8602 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:06:25.459755    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:06:25.475579    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:06:25.567235    8602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-2269/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 00:06:25.590581    8602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-2269/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 00:06:25.613942    8602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-2269/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 00:06:25.637074    8602 provision.go:87] duration metric: took 742.156097ms to configureAuth
	I0717 00:06:25.637103    8602 ubuntu.go:193] setting minikube options for container-runtime
	I0717 00:06:25.637288    8602 config.go:182] Loaded profile config "addons-579136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:06:25.637398    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:06:25.653757    8602 main.go:141] libmachine: Using SSH client type: native
	I0717 00:06:25.654008    8602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0717 00:06:25.654028    8602 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:06:25.881872    8602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 00:06:25.881899    8602 machine.go:97] duration metric: took 1.446626745s to provisionDockerMachine
	I0717 00:06:25.881910    8602 client.go:171] duration metric: took 10.039150985s to LocalClient.Create
	I0717 00:06:25.881922    8602 start.go:167] duration metric: took 10.039208472s to libmachine.API.Create "addons-579136"
	I0717 00:06:25.881930    8602 start.go:293] postStartSetup for "addons-579136" (driver="docker")
	I0717 00:06:25.881942    8602 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:06:25.882009    8602 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:06:25.882069    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:06:25.898919    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:06:25.992199    8602 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:06:25.995343    8602 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 00:06:25.995382    8602 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 00:06:25.995393    8602 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 00:06:25.995399    8602 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0717 00:06:25.995410    8602 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-2269/.minikube/addons for local assets ...
	I0717 00:06:25.995484    8602 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-2269/.minikube/files for local assets ...
	I0717 00:06:25.995513    8602 start.go:296] duration metric: took 113.575913ms for postStartSetup
	I0717 00:06:25.995825    8602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-579136
	I0717 00:06:26.011354    8602 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/config.json ...
	I0717 00:06:26.011633    8602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:06:26.011695    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:06:26.029759    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:06:26.123460    8602 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 00:06:26.127771    8602 start.go:128] duration metric: took 10.287865941s to createHost
	I0717 00:06:26.127796    8602 start.go:83] releasing machines lock for "addons-579136", held for 10.288020916s
	I0717 00:06:26.127868    8602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-579136
	I0717 00:06:26.144522    8602 ssh_runner.go:195] Run: cat /version.json
	I0717 00:06:26.144549    8602 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:06:26.144583    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:06:26.144592    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:06:26.162314    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:06:26.164775    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:06:26.250197    8602 ssh_runner.go:195] Run: systemctl --version
	I0717 00:06:26.381267    8602 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:06:26.522784    8602 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 00:06:26.526747    8602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:06:26.546970    8602 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 00:06:26.547084    8602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:06:26.577386    8602 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0717 00:06:26.577455    8602 start.go:495] detecting cgroup driver to use...
	I0717 00:06:26.577500    8602 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0717 00:06:26.577582    8602 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:06:26.592419    8602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:06:26.603548    8602 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:06:26.603640    8602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:06:26.617198    8602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:06:26.631446    8602 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:06:26.716086    8602 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:06:26.804528    8602 docker.go:233] disabling docker service ...
	I0717 00:06:26.804621    8602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:06:26.822706    8602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:06:26.835110    8602 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:06:26.912886    8602 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:06:26.997678    8602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:06:27.009379    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:06:27.024882    8602 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:06:27.024990    8602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:06:27.035299    8602 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:06:27.035414    8602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:06:27.047736    8602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:06:27.058381    8602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:06:27.068470    8602 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:06:27.077663    8602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:06:27.087414    8602 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:06:27.103392    8602 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:06:27.112734    8602 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:06:27.121416    8602 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:06:27.129716    8602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:06:27.209088    8602 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 00:06:27.317896    8602 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:06:27.317973    8602 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:06:27.321310    8602 start.go:563] Will wait 60s for crictl version
	I0717 00:06:27.321415    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:06:27.324658    8602 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:06:27.366722    8602 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0717 00:06:27.366925    8602 ssh_runner.go:195] Run: crio --version
	I0717 00:06:27.404159    8602 ssh_runner.go:195] Run: crio --version
	I0717 00:06:27.452122    8602 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.24.6 ...
	I0717 00:06:27.454156    8602 cli_runner.go:164] Run: docker network inspect addons-579136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 00:06:27.469880    8602 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0717 00:06:27.473474    8602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:06:27.484111    8602 kubeadm.go:883] updating cluster {Name:addons-579136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-579136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 00:06:27.484231    8602 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:06:27.484291    8602 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:06:27.559845    8602 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:06:27.559870    8602 crio.go:433] Images already preloaded, skipping extraction
	I0717 00:06:27.559927    8602 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:06:27.597556    8602 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:06:27.597578    8602 cache_images.go:84] Images are preloaded, skipping loading
	I0717 00:06:27.597586    8602 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.2 crio true true} ...
	I0717 00:06:27.597692    8602 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-579136 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-579136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 00:06:27.597780    8602 ssh_runner.go:195] Run: crio config
	I0717 00:06:27.664543    8602 cni.go:84] Creating CNI manager for ""
	I0717 00:06:27.664564    8602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 00:06:27.664574    8602 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 00:06:27.664613    8602 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-579136 NodeName:addons-579136 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 00:06:27.664780    8602 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-579136"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 00:06:27.664852    8602 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:06:27.673409    8602 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 00:06:27.673479    8602 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 00:06:27.681833    8602 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0717 00:06:27.699068    8602 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:06:27.716505    8602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0717 00:06:27.733914    8602 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0717 00:06:27.737276    8602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:06:27.747138    8602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:06:27.824330    8602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:06:27.838989    8602 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136 for IP: 192.168.49.2
	I0717 00:06:27.839068    8602 certs.go:194] generating shared ca certs ...
	I0717 00:06:27.839107    8602 certs.go:226] acquiring lock for ca certs: {Name:mkd227790b4a676b68da1df63243d6b7540ab556 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:06:27.839292    8602 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-2269/.minikube/ca.key
	I0717 00:06:29.026214    8602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-2269/.minikube/ca.crt ...
	I0717 00:06:29.026249    8602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/.minikube/ca.crt: {Name:mk3ec6da30a15bb4ce3cdc12ce9f3da174fadba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:06:29.026462    8602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-2269/.minikube/ca.key ...
	I0717 00:06:29.026477    8602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/.minikube/ca.key: {Name:mkc30886f597901886d2d4c317e10e44fcbf8c2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:06:29.026565    8602 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-2269/.minikube/proxy-client-ca.key
	I0717 00:06:29.523380    8602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-2269/.minikube/proxy-client-ca.crt ...
	I0717 00:06:29.523410    8602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/.minikube/proxy-client-ca.crt: {Name:mk53d37825bc9a371566524346a60efd23148742 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:06:29.523581    8602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-2269/.minikube/proxy-client-ca.key ...
	I0717 00:06:29.523597    8602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/.minikube/proxy-client-ca.key: {Name:mk5d59d9b144da98619c094f3ce5d5210ba2947f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:06:29.523664    8602 certs.go:256] generating profile certs ...
	I0717 00:06:29.523724    8602 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.key
	I0717 00:06:29.523741    8602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt with IP's: []
	I0717 00:06:30.053601    8602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt ...
	I0717 00:06:30.053638    8602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: {Name:mkc3473735b3e7e9a0c799c64478883e8d7fe68a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:06:30.053852    8602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.key ...
	I0717 00:06:30.053862    8602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.key: {Name:mkb555ad5b349a8106a3d863d415f8009d89f511 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:06:30.053926    8602 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/apiserver.key.fdc13f68
	I0717 00:06:30.053941    8602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/apiserver.crt.fdc13f68 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0717 00:06:30.467817    8602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/apiserver.crt.fdc13f68 ...
	I0717 00:06:30.467848    8602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/apiserver.crt.fdc13f68: {Name:mk83c745853da1eccfdb14386e08c9a4fe32a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:06:30.468031    8602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/apiserver.key.fdc13f68 ...
	I0717 00:06:30.468045    8602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/apiserver.key.fdc13f68: {Name:mk39ed371af8bbd6214602955d59605f4575606b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:06:30.468124    8602 certs.go:381] copying /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/apiserver.crt.fdc13f68 -> /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/apiserver.crt
	I0717 00:06:30.468208    8602 certs.go:385] copying /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/apiserver.key.fdc13f68 -> /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/apiserver.key
	I0717 00:06:30.468267    8602 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/proxy-client.key
	I0717 00:06:30.468287    8602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/proxy-client.crt with IP's: []
	I0717 00:06:30.655315    8602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/proxy-client.crt ...
	I0717 00:06:30.655342    8602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/proxy-client.crt: {Name:mke72778136fdede5583f9c6d7fc9346ea22d347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:06:30.655509    8602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/proxy-client.key ...
	I0717 00:06:30.655520    8602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/proxy-client.key: {Name:mke6142835b5c3458149a3dbe00b0cf5d87082fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:06:30.655706    8602 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-2269/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 00:06:30.655746    8602 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-2269/.minikube/certs/ca.pem (1078 bytes)
	I0717 00:06:30.655780    8602 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-2269/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:06:30.655809    8602 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-2269/.minikube/certs/key.pem (1679 bytes)
	I0717 00:06:30.656809    8602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-2269/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:06:30.681949    8602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-2269/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 00:06:30.705535    8602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-2269/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:06:30.728299    8602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-2269/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 00:06:30.751969    8602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0717 00:06:30.775188    8602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 00:06:30.798447    8602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:06:30.821855    8602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 00:06:30.845628    8602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-2269/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:06:30.869557    8602 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 00:06:30.886975    8602 ssh_runner.go:195] Run: openssl version
	I0717 00:06:30.892481    8602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:06:30.901974    8602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:06:30.905331    8602 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:06:30.905432    8602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:06:30.912195    8602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 00:06:30.921010    8602 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:06:30.924172    8602 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 00:06:30.924223    8602 kubeadm.go:392] StartCluster: {Name:addons-579136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-579136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:06:30.924303    8602 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 00:06:30.924374    8602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 00:06:30.965911    8602 cri.go:89] found id: ""
	I0717 00:06:30.965995    8602 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 00:06:30.975115    8602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 00:06:30.983989    8602 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0717 00:06:30.984056    8602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 00:06:30.992636    8602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 00:06:30.992654    8602 kubeadm.go:157] found existing configuration files:
	
	I0717 00:06:30.992704    8602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 00:06:31.001695    8602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 00:06:31.001811    8602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 00:06:31.017086    8602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 00:06:31.026103    8602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 00:06:31.026171    8602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 00:06:31.035583    8602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 00:06:31.045905    8602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 00:06:31.045973    8602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 00:06:31.055236    8602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 00:06:31.064518    8602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 00:06:31.064582    8602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 00:06:31.073157    8602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 00:06:31.120178    8602 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 00:06:31.120467    8602 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 00:06:31.162112    8602 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0717 00:06:31.162187    8602 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1064-aws
	I0717 00:06:31.162226    8602 kubeadm.go:310] OS: Linux
	I0717 00:06:31.162276    8602 kubeadm.go:310] CGROUPS_CPU: enabled
	I0717 00:06:31.162328    8602 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0717 00:06:31.162378    8602 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0717 00:06:31.162441    8602 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0717 00:06:31.162491    8602 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0717 00:06:31.162544    8602 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0717 00:06:31.162592    8602 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0717 00:06:31.162644    8602 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0717 00:06:31.162694    8602 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0717 00:06:31.225910    8602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 00:06:31.226019    8602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 00:06:31.226115    8602 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 00:06:31.452427    8602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 00:06:31.456435    8602 out.go:204]   - Generating certificates and keys ...
	I0717 00:06:31.456564    8602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 00:06:31.456636    8602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 00:06:32.265600    8602 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 00:06:32.475865    8602 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 00:06:33.209980    8602 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 00:06:33.791398    8602 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 00:06:34.853611    8602 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 00:06:34.853922    8602 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-579136 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 00:06:35.223691    8602 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 00:06:35.224043    8602 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-579136 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 00:06:35.950199    8602 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 00:06:36.481470    8602 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 00:06:36.866085    8602 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 00:06:36.866381    8602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 00:06:37.288151    8602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 00:06:37.898653    8602 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 00:06:38.367882    8602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 00:06:39.066960    8602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 00:06:39.627960    8602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 00:06:39.628583    8602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 00:06:39.631314    8602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 00:06:39.633392    8602 out.go:204]   - Booting up control plane ...
	I0717 00:06:39.633500    8602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 00:06:39.633591    8602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 00:06:39.636083    8602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 00:06:39.646375    8602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 00:06:39.647288    8602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 00:06:39.647559    8602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 00:06:39.739021    8602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 00:06:39.739115    8602 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 00:06:41.240521    8602 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501349412s
	I0717 00:06:41.240605    8602 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 00:06:47.249078    8602 kubeadm.go:310] [api-check] The API server is healthy after 6.006736063s
	I0717 00:06:47.286812    8602 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 00:06:47.304756    8602 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 00:06:47.346084    8602 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 00:06:47.346298    8602 kubeadm.go:310] [mark-control-plane] Marking the node addons-579136 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 00:06:47.357648    8602 kubeadm.go:310] [bootstrap-token] Using token: svznq2.hqfvh980hynisrq7
	I0717 00:06:47.359735    8602 out.go:204]   - Configuring RBAC rules ...
	I0717 00:06:47.359875    8602 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 00:06:47.364737    8602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 00:06:47.371785    8602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 00:06:47.374977    8602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 00:06:47.379436    8602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 00:06:47.383380    8602 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 00:06:47.655186    8602 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 00:06:48.100875    8602 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 00:06:48.652987    8602 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 00:06:48.654076    8602 kubeadm.go:310] 
	I0717 00:06:48.654153    8602 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 00:06:48.654159    8602 kubeadm.go:310] 
	I0717 00:06:48.654233    8602 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 00:06:48.654238    8602 kubeadm.go:310] 
	I0717 00:06:48.654262    8602 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 00:06:48.654319    8602 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 00:06:48.654370    8602 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 00:06:48.654375    8602 kubeadm.go:310] 
	I0717 00:06:48.654426    8602 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 00:06:48.654431    8602 kubeadm.go:310] 
	I0717 00:06:48.654476    8602 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 00:06:48.654484    8602 kubeadm.go:310] 
	I0717 00:06:48.654535    8602 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 00:06:48.654606    8602 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 00:06:48.654671    8602 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 00:06:48.654676    8602 kubeadm.go:310] 
	I0717 00:06:48.654779    8602 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 00:06:48.654854    8602 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 00:06:48.654858    8602 kubeadm.go:310] 
	I0717 00:06:48.654939    8602 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token svznq2.hqfvh980hynisrq7 \
	I0717 00:06:48.655038    8602 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:364e57b24df01cc43b6451b84edd589741e4028e3e02ff8d2cf1063ebd74c881 \
	I0717 00:06:48.655058    8602 kubeadm.go:310] 	--control-plane 
	I0717 00:06:48.655063    8602 kubeadm.go:310] 
	I0717 00:06:48.655144    8602 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 00:06:48.655148    8602 kubeadm.go:310] 
	I0717 00:06:48.655227    8602 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token svznq2.hqfvh980hynisrq7 \
	I0717 00:06:48.655330    8602 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:364e57b24df01cc43b6451b84edd589741e4028e3e02ff8d2cf1063ebd74c881 
	I0717 00:06:48.657407    8602 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1064-aws\n", err: exit status 1
	I0717 00:06:48.657522    8602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 00:06:48.657545    8602 cni.go:84] Creating CNI manager for ""
	I0717 00:06:48.657556    8602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 00:06:48.659655    8602 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 00:06:48.661620    8602 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 00:06:48.665364    8602 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0717 00:06:48.665383    8602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 00:06:48.683457    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 00:06:48.932187    8602 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 00:06:48.932320    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-579136 minikube.k8s.io/updated_at=2024_07_17T00_06_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=addons-579136 minikube.k8s.io/primary=true
	I0717 00:06:48.932351    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:48.945171    8602 ops.go:34] apiserver oom_adj: -16
	I0717 00:06:49.029893    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:49.530873    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:50.030026    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:50.530843    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:51.030435    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:51.530634    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:52.030959    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:52.530669    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:53.030015    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:53.530901    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:54.030936    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:54.530138    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:55.030684    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:55.530964    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:56.030313    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:56.530560    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:57.030137    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:57.530795    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:58.030897    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:58.530215    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:59.030449    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:59.530861    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:07:00.030903    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:07:00.530897    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:07:01.030660    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:07:01.530611    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:07:01.623571    8602 kubeadm.go:1113] duration metric: took 12.691302844s to wait for elevateKubeSystemPrivileges
	I0717 00:07:01.623604    8602 kubeadm.go:394] duration metric: took 30.699384745s to StartCluster
	I0717 00:07:01.623622    8602 settings.go:142] acquiring lock: {Name:mk883dff9b09cfe64fa59919f3a5dca1089afb6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:07:01.623739    8602 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19265-2269/kubeconfig
	I0717 00:07:01.624179    8602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/kubeconfig: {Name:mk7d21bd0dadef6e1232ea2d159c34b00c02e88a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:07:01.624392    8602 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:07:01.624494    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 00:07:01.624745    8602 config.go:182] Loaded profile config "addons-579136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:07:01.624784    8602 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0717 00:07:01.624860    8602 addons.go:69] Setting yakd=true in profile "addons-579136"
	I0717 00:07:01.624885    8602 addons.go:234] Setting addon yakd=true in "addons-579136"
	I0717 00:07:01.624911    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.625370    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.625832    8602 addons.go:69] Setting cloud-spanner=true in profile "addons-579136"
	I0717 00:07:01.625872    8602 addons.go:234] Setting addon cloud-spanner=true in "addons-579136"
	I0717 00:07:01.625901    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.626356    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.626478    8602 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-579136"
	I0717 00:07:01.626503    8602 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-579136"
	I0717 00:07:01.626530    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.626799    8602 addons.go:69] Setting registry=true in profile "addons-579136"
	I0717 00:07:01.626826    8602 addons.go:234] Setting addon registry=true in "addons-579136"
	I0717 00:07:01.626850    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.626913    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.627329    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.629656    8602 addons.go:69] Setting storage-provisioner=true in profile "addons-579136"
	I0717 00:07:01.629695    8602 addons.go:234] Setting addon storage-provisioner=true in "addons-579136"
	I0717 00:07:01.629735    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.630140    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.630773    8602 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-579136"
	I0717 00:07:01.630833    8602 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-579136"
	I0717 00:07:01.630859    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.631253    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.637010    8602 addons.go:69] Setting default-storageclass=true in profile "addons-579136"
	I0717 00:07:01.637098    8602 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-579136"
	I0717 00:07:01.637432    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.637758    8602 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-579136"
	I0717 00:07:01.637792    8602 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-579136"
	I0717 00:07:01.638134    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.648553    8602 addons.go:69] Setting volcano=true in profile "addons-579136"
	I0717 00:07:01.648652    8602 addons.go:234] Setting addon volcano=true in "addons-579136"
	I0717 00:07:01.648722    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.649202    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.649499    8602 addons.go:69] Setting gcp-auth=true in profile "addons-579136"
	I0717 00:07:01.649568    8602 mustload.go:65] Loading cluster: addons-579136
	I0717 00:07:01.649744    8602 config.go:182] Loaded profile config "addons-579136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:07:01.650005    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.681858    8602 addons.go:69] Setting ingress=true in profile "addons-579136"
	I0717 00:07:01.681991    8602 addons.go:234] Setting addon ingress=true in "addons-579136"
	I0717 00:07:01.682378    8602 addons.go:69] Setting volumesnapshots=true in profile "addons-579136"
	I0717 00:07:01.682463    8602 addons.go:234] Setting addon volumesnapshots=true in "addons-579136"
	I0717 00:07:01.682518    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.682329    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.688041    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.687447    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.732058    8602 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0717 00:07:01.734048    8602 addons.go:234] Setting addon default-storageclass=true in "addons-579136"
	I0717 00:07:01.734137    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.734689    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.687531    8602 out.go:177] * Verifying Kubernetes components...
	I0717 00:07:01.722844    8602 addons.go:69] Setting ingress-dns=true in profile "addons-579136"
	I0717 00:07:01.744445    8602 addons.go:234] Setting addon ingress-dns=true in "addons-579136"
	I0717 00:07:01.744534    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.745067    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.752477    8602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:07:01.722858    8602 addons.go:69] Setting inspektor-gadget=true in profile "addons-579136"
	I0717 00:07:01.756782    8602 addons.go:234] Setting addon inspektor-gadget=true in "addons-579136"
	I0717 00:07:01.756864    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.757915    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.775702    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.722865    8602 addons.go:69] Setting metrics-server=true in profile "addons-579136"
	I0717 00:07:01.790280    8602 addons.go:234] Setting addon metrics-server=true in "addons-579136"
	I0717 00:07:01.790347    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.790859    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.799780    8602 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0717 00:07:01.803618    8602 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0717 00:07:01.743423    8602 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0717 00:07:01.744411    8602 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-579136"
	I0717 00:07:01.804040    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.804472    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.818106    8602 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0717 00:07:01.818190    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:01.826937    8602 out.go:177]   - Using image docker.io/registry:2.8.3
	I0717 00:07:01.833038    8602 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0717 00:07:01.836417    8602 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0717 00:07:01.836480    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0717 00:07:01.836594    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:01.846873    8602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0717 00:07:01.850890    8602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0717 00:07:01.855603    8602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0717 00:07:01.858931    8602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0717 00:07:01.861327    8602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0717 00:07:01.861453    8602 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 00:07:01.861471    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0717 00:07:01.861539    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:01.874962    8602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:07:01.861372    8602 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0717 00:07:01.877943    8602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	W0717 00:07:01.878204    8602 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0717 00:07:01.882252    8602 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0717 00:07:01.882273    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0717 00:07:01.882336    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:01.899383    8602 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 00:07:01.899666    8602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:07:01.903042    8602 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 00:07:01.903067    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0717 00:07:01.903134    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:01.903394    8602 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:07:01.903425    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 00:07:01.903488    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:01.941967    8602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0717 00:07:01.944012    8602 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0717 00:07:01.948618    8602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0717 00:07:01.948647    8602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0717 00:07:01.948729    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:01.973621    8602 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0717 00:07:01.974566    8602 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 00:07:01.974588    8602 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 00:07:01.974666    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:01.976858    8602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0717 00:07:01.976878    8602 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0717 00:07:01.976935    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:02.000511    8602 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0717 00:07:02.000713    8602 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0717 00:07:02.003759    8602 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0717 00:07:02.003783    8602 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0717 00:07:02.004038    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:02.004114    8602 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 00:07:02.004125    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0717 00:07:02.004278    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:02.021337    8602 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0717 00:07:02.023384    8602 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 00:07:02.023415    8602 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 00:07:02.023487    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:02.040543    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.041436    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.043359    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.044341    8602 out.go:177]   - Using image docker.io/busybox:stable
	I0717 00:07:02.046900    8602 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0717 00:07:02.050085    8602 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 00:07:02.050108    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0717 00:07:02.050179    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:02.060880    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 00:07:02.142850    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.159249    8602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:07:02.172034    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.175383    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.175468    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.201914    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.212673    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.213515    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.216985    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.217853    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.223734    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.368547    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 00:07:02.488567    8602 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0717 00:07:02.488586    8602 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0717 00:07:02.491976    8602 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0717 00:07:02.491993    8602 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0717 00:07:02.580945    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0717 00:07:02.630444    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:07:02.636982    8602 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0717 00:07:02.637052    8602 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0717 00:07:02.641441    8602 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 00:07:02.641497    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0717 00:07:02.645972    8602 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0717 00:07:02.646038    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0717 00:07:02.648905    8602 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0717 00:07:02.648959    8602 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0717 00:07:02.652030    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 00:07:02.657096    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 00:07:02.661875    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 00:07:02.679389    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 00:07:02.713394    8602 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0717 00:07:02.713465    8602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0717 00:07:02.720533    8602 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0717 00:07:02.720603    8602 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0717 00:07:02.721360    8602 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0717 00:07:02.721416    8602 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0717 00:07:02.808185    8602 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0717 00:07:02.808259    8602 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0717 00:07:02.812366    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0717 00:07:02.847882    8602 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 00:07:02.847959    8602 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 00:07:02.891717    8602 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0717 00:07:02.891786    8602 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0717 00:07:02.898020    8602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0717 00:07:02.898079    8602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0717 00:07:02.908561    8602 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0717 00:07:02.908640    8602 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0717 00:07:02.945142    8602 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0717 00:07:02.945211    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0717 00:07:02.957363    8602 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 00:07:02.957386    8602 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 00:07:03.018535    8602 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0717 00:07:03.018608    8602 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0717 00:07:03.052616    8602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0717 00:07:03.052687    8602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0717 00:07:03.056820    8602 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0717 00:07:03.056895    8602 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0717 00:07:03.096683    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 00:07:03.109085    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0717 00:07:03.185848    8602 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0717 00:07:03.185936    8602 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0717 00:07:03.209359    8602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0717 00:07:03.209430    8602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0717 00:07:03.212571    8602 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0717 00:07:03.212630    8602 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0717 00:07:03.247696    8602 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0717 00:07:03.247776    8602 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0717 00:07:03.325417    8602 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:07:03.325485    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0717 00:07:03.330175    8602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0717 00:07:03.330234    8602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0717 00:07:03.333374    8602 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 00:07:03.333439    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0717 00:07:03.411920    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 00:07:03.426187    8602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0717 00:07:03.426286    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0717 00:07:03.447309    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:07:03.471604    8602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0717 00:07:03.471674    8602 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0717 00:07:03.547618    8602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0717 00:07:03.547690    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0717 00:07:03.586432    8602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0717 00:07:03.586502    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0717 00:07:03.653459    8602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 00:07:03.653527    8602 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0717 00:07:03.720466    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 00:07:05.215869    8602 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.154946248s)
	I0717 00:07:05.215944    8602 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0717 00:07:05.216342    8602 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.057070314s)
	I0717 00:07:05.217969    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.849386025s)
	I0717 00:07:05.221805    8602 node_ready.go:35] waiting up to 6m0s for node "addons-579136" to be "Ready" ...
	I0717 00:07:05.857041    8602 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-579136" context rescaled to 1 replicas
	I0717 00:07:06.117696    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.536678906s)
	I0717 00:07:06.758470    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.127946616s)
	I0717 00:07:07.259476    8602 node_ready.go:53] node "addons-579136" has status "Ready":"False"
	I0717 00:07:08.295403    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.643291053s)
	I0717 00:07:08.295896    8602 addons.go:475] Verifying addon ingress=true in "addons-579136"
	I0717 00:07:08.295600    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.638442647s)
	I0717 00:07:08.295628    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.633688415s)
	I0717 00:07:08.295656    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.616205272s)
	I0717 00:07:08.295691    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.483268636s)
	I0717 00:07:08.296482    8602 addons.go:475] Verifying addon registry=true in "addons-579136"
	I0717 00:07:08.295741    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.198989154s)
	I0717 00:07:08.296801    8602 addons.go:475] Verifying addon metrics-server=true in "addons-579136"
	I0717 00:07:08.295771    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.18661712s)
	I0717 00:07:08.295825    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.883836952s)
	I0717 00:07:08.298661    8602 out.go:177] * Verifying ingress addon...
	I0717 00:07:08.300642    8602 out.go:177] * Verifying registry addon...
	I0717 00:07:08.300730    8602 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-579136 service yakd-dashboard -n yakd-dashboard
	
	I0717 00:07:08.303123    8602 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0717 00:07:08.304717    8602 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0717 00:07:08.325588    8602 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 00:07:08.325667    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:08.326485    8602 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0717 00:07:08.326529    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:08.343410    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.896010019s)
	W0717 00:07:08.343511    8602 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 00:07:08.343554    8602 retry.go:31] will retry after 366.531691ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	W0717 00:07:08.346481    8602 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0717 00:07:08.693196    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.972624246s)
	I0717 00:07:08.693274    8602 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-579136"
	I0717 00:07:08.697284    8602 out.go:177] * Verifying csi-hostpath-driver addon...
	I0717 00:07:08.700232    8602 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0717 00:07:08.707781    8602 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 00:07:08.707848    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:08.711125    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:07:08.810834    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:08.821054    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:09.204932    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:09.351612    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:09.352469    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:09.707972    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:09.725086    8602 node_ready.go:53] node "addons-579136" has status "Ready":"False"
	I0717 00:07:09.807115    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:09.809714    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:10.162087    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.450874475s)
	I0717 00:07:10.207302    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:10.308814    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:10.309609    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:10.705631    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:10.810742    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:10.815720    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:10.866689    8602 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0717 00:07:10.866806    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:10.902823    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:11.032849    8602 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0717 00:07:11.061737    8602 addons.go:234] Setting addon gcp-auth=true in "addons-579136"
	I0717 00:07:11.061790    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:11.062215    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:11.089779    8602 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0717 00:07:11.089845    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:11.119669    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:11.204710    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:11.221194    8602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:07:11.222691    8602 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0717 00:07:11.224423    8602 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0717 00:07:11.224449    8602 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0717 00:07:11.253749    8602 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0717 00:07:11.253775    8602 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0717 00:07:11.292932    8602 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 00:07:11.292961    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0717 00:07:11.309042    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:11.312704    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:11.324108    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 00:07:11.705084    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:11.726297    8602 node_ready.go:53] node "addons-579136" has status "Ready":"False"
	I0717 00:07:11.809841    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:11.811144    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:12.062790    8602 addons.go:475] Verifying addon gcp-auth=true in "addons-579136"
	I0717 00:07:12.065081    8602 out.go:177] * Verifying gcp-auth addon...
	I0717 00:07:12.068292    8602 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0717 00:07:12.087167    8602 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 00:07:12.087194    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:12.204755    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:12.309345    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:12.311432    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:12.572346    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:12.704291    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:12.813422    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:12.815402    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:13.073192    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:13.205812    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:13.308675    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:13.309316    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:13.571667    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:13.704307    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:13.808785    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:13.811555    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:14.073503    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:14.205525    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:14.226272    8602 node_ready.go:53] node "addons-579136" has status "Ready":"False"
	I0717 00:07:14.308019    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:14.309029    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:14.572679    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:14.705128    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:14.809673    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:14.810054    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:15.072356    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:15.204994    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:15.308674    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:15.309079    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:15.572200    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:15.704828    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:15.807945    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:15.810181    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:16.072350    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:16.205139    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:16.308486    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:16.309452    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:16.572175    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:16.705231    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:16.725864    8602 node_ready.go:53] node "addons-579136" has status "Ready":"False"
	I0717 00:07:16.808246    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:16.808654    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:17.072237    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:17.204224    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:17.307608    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:17.308669    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:17.571821    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:17.705386    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:17.807483    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:17.808486    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:18.073659    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:18.204688    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:18.308373    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:18.309802    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:18.571551    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:18.704098    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:18.807176    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:18.812872    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:19.072433    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:19.204505    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:19.225078    8602 node_ready.go:53] node "addons-579136" has status "Ready":"False"
	I0717 00:07:19.306860    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:19.309374    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:19.573474    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:19.756562    8602 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 00:07:19.756588    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:19.761429    8602 node_ready.go:49] node "addons-579136" has status "Ready":"True"
	I0717 00:07:19.761454    8602 node_ready.go:38] duration metric: took 14.539589335s for node "addons-579136" to be "Ready" ...
	I0717 00:07:19.761465    8602 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:07:19.816478    8602 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p58r6" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:19.870425    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:19.878836    8602 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 00:07:19.878862    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:20.072324    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:20.209882    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:20.318533    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:20.336550    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:20.573016    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:20.706998    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:20.810479    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:20.811442    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:21.072594    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:21.206445    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:21.308395    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:21.310431    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:21.571310    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:21.706534    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:21.809490    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:21.810111    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:21.823413    8602 pod_ready.go:92] pod "coredns-7db6d8ff4d-p58r6" in "kube-system" namespace has status "Ready":"True"
	I0717 00:07:21.823436    8602 pod_ready.go:81] duration metric: took 2.006918796s for pod "coredns-7db6d8ff4d-p58r6" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:21.823460    8602 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-579136" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:21.830728    8602 pod_ready.go:92] pod "etcd-addons-579136" in "kube-system" namespace has status "Ready":"True"
	I0717 00:07:21.830813    8602 pod_ready.go:81] duration metric: took 7.34477ms for pod "etcd-addons-579136" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:21.830844    8602 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-579136" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:21.840242    8602 pod_ready.go:92] pod "kube-apiserver-addons-579136" in "kube-system" namespace has status "Ready":"True"
	I0717 00:07:21.840306    8602 pod_ready.go:81] duration metric: took 9.440382ms for pod "kube-apiserver-addons-579136" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:21.840335    8602 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-579136" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:21.845704    8602 pod_ready.go:92] pod "kube-controller-manager-addons-579136" in "kube-system" namespace has status "Ready":"True"
	I0717 00:07:21.845775    8602 pod_ready.go:81] duration metric: took 5.41064ms for pod "kube-controller-manager-addons-579136" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:21.845805    8602 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b7z7h" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:21.861543    8602 pod_ready.go:92] pod "kube-proxy-b7z7h" in "kube-system" namespace has status "Ready":"True"
	I0717 00:07:21.861616    8602 pod_ready.go:81] duration metric: took 15.783777ms for pod "kube-proxy-b7z7h" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:21.861644    8602 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-579136" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:22.072692    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:22.206276    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:22.220724    8602 pod_ready.go:92] pod "kube-scheduler-addons-579136" in "kube-system" namespace has status "Ready":"True"
	I0717 00:07:22.220749    8602 pod_ready.go:81] duration metric: took 359.083854ms for pod "kube-scheduler-addons-579136" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:22.220790    8602 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:22.311144    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:22.312446    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:22.573794    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:22.707006    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:22.809028    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:22.809862    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:23.072237    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:23.206142    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:23.308646    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:23.309553    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:23.571887    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:23.706331    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:23.811090    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:23.822558    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:24.073950    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:24.207015    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:24.228139    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:24.311237    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:24.317661    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:24.572739    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:24.707226    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:24.812479    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:24.813326    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:25.073274    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:25.205857    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:25.309517    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:25.310732    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:25.572678    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:25.706160    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:25.807676    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:25.811309    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:26.073658    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:26.206421    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:26.232156    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:26.309089    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:26.314325    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:26.575254    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:26.708164    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:26.816680    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:26.818396    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:27.071724    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:27.206673    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:27.311620    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:27.312738    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:27.572860    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:27.706482    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:27.809178    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:27.814128    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:28.071695    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:28.205880    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:28.308024    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:28.310697    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:28.580159    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:28.705863    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:28.727380    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:28.807383    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:28.809973    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:29.072043    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:29.206724    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:29.307525    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:29.309886    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:29.572555    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:29.706284    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:29.809038    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:29.810396    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:30.073383    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:30.206618    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:30.309073    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:30.316497    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:30.572442    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:30.719103    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:30.731005    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:30.808955    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:30.810136    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:31.073163    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:31.206726    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:31.309840    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:31.310973    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:31.572397    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:31.706893    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:31.808390    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:31.818378    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:32.072247    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:32.206239    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:32.311236    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:32.312734    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:32.571943    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:32.707055    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:32.811211    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:32.814504    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:33.072735    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:33.210996    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:33.234500    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:33.310297    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:33.311202    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:33.571988    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:33.707235    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:33.816193    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:33.817805    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:34.073610    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:34.205796    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:34.308532    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:34.310682    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:34.577447    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:34.707658    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:34.809018    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:34.811869    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:35.072580    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:35.206713    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:35.307543    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:35.310438    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:35.573084    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:35.706174    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:35.729593    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:35.806993    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:35.809959    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:36.072728    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:36.205508    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:36.308743    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:36.309580    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:36.572156    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:36.705811    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:36.808966    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:36.810422    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:37.073418    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:37.209013    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:37.310675    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:37.319840    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:37.573754    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:37.707669    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:37.730800    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:37.812162    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:37.813061    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:38.071899    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:38.207787    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:38.311429    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:38.313438    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:38.574064    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:38.706950    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:38.810238    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:38.813488    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:39.087960    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:39.212538    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:39.314296    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:39.315905    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:39.577637    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:39.707991    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:39.822112    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:39.826129    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:40.083845    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:40.215608    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:40.237607    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:40.321668    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:40.322867    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:40.572273    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:40.722556    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:40.814632    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:40.815499    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:41.074935    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:41.207910    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:41.310482    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:41.317966    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:41.572395    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:41.707156    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:41.808221    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:41.810496    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:42.072643    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:42.207878    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:42.308767    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:42.318164    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:42.572409    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:42.708168    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:42.743242    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:42.818568    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:42.819540    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:43.073080    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:43.208020    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:43.314591    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:43.315995    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:43.572254    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:43.708885    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:43.811100    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:43.814181    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:44.072340    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:44.209086    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:44.310441    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:44.319873    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:44.573999    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:44.706844    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:44.809352    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:44.815560    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:45.075321    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:45.208058    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:45.228798    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:45.318558    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:45.319977    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:45.573169    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:45.705829    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:45.820515    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:45.821941    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:46.072922    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:46.207245    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:46.309309    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:46.315037    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:46.575736    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:46.709955    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:46.813988    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:46.816092    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:47.076340    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:47.208747    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:47.236993    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:47.311391    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:47.315799    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:47.572783    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:47.709658    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:47.812710    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:47.814614    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:48.072955    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:48.206365    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:48.309476    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:48.312610    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:48.577214    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:48.705408    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:48.809771    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:48.810606    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:49.072239    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:49.206031    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:49.311321    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:49.312354    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:49.571967    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:49.706041    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:49.726416    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:49.808374    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:49.809655    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:50.074233    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:50.208359    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:50.307388    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:50.311012    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:50.571948    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:50.706855    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:50.807612    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:50.811211    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:51.071690    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:51.206261    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:51.308339    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:51.309793    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:51.572690    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:51.706287    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:51.727382    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:51.807867    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:51.810109    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:52.072112    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:52.205913    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:52.312295    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:52.313652    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:52.572783    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:52.730422    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:52.810213    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:52.813250    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:53.074908    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:53.207644    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:53.311714    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:53.315963    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:53.581520    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:53.708269    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:53.734065    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:53.811584    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:53.812392    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:54.072939    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:54.209246    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:54.314361    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:54.321234    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:54.573156    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:54.707936    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:54.822725    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:54.824528    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:55.072642    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:55.206313    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:55.308189    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:55.310162    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:55.572886    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:55.706381    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:55.809977    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:55.810348    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:56.073813    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:56.247249    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:56.248357    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:56.315215    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:56.316100    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:56.576661    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:56.705786    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:56.811037    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:56.818626    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:57.073188    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:57.216290    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:57.310854    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:57.326454    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:57.572221    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:57.707543    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:57.820117    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:57.824058    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:58.072659    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:58.206637    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:58.320487    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:58.323682    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:58.572336    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:58.713483    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:58.732443    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:58.812450    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:58.812650    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:59.072538    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:59.213120    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:59.308917    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:59.311003    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:59.571677    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:59.705623    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:59.809531    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:59.810240    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:00.081379    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:00.212345    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:00.309400    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:00.312484    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:00.572146    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:00.711897    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:00.811680    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:00.812320    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:01.073140    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:01.207057    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:01.230541    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:01.311226    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:01.311569    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:01.571804    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:01.706159    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:01.807750    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:01.810644    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:02.072748    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:02.206066    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:02.310014    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:02.311938    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:02.575222    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:02.705435    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:02.808915    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:02.811823    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:03.073004    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:03.206464    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:03.233906    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:03.308748    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:03.315453    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:03.572185    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:03.706367    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:03.810365    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:03.811312    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:04.072253    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:04.206025    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:04.309204    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:04.312263    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:04.571838    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:04.707080    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:04.811827    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:04.820579    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:05.073369    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:05.215031    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:05.315133    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:05.333813    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:05.572967    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:05.706244    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:05.727170    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:05.808039    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:05.812555    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:06.072595    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:06.209573    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:06.312832    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:06.314613    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:06.572986    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:06.711079    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:06.811346    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:06.814082    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:07.073130    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:07.207956    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:07.321290    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:07.322900    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:07.572584    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:07.706392    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:07.729626    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:07.808341    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:07.815862    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:08.073147    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:08.208273    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:08.311007    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:08.314519    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:08.572603    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:08.706263    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:08.810947    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:08.813704    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:09.072817    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:09.206416    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:09.307827    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:09.308892    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:09.571542    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:09.706174    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:09.808227    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:09.812497    8602 kapi.go:107] duration metric: took 1m1.507773473s to wait for kubernetes.io/minikube-addons=registry ...
	I0717 00:08:10.073058    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:10.207915    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:10.231579    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:10.309510    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:10.579797    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:10.705910    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:10.809786    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:11.072448    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:11.205848    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:11.307344    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:11.572281    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:11.705674    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:11.808086    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:12.071842    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:12.209352    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:12.309119    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:12.572907    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:12.707694    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:12.737869    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:12.810409    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:13.072297    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:13.206645    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:13.308175    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:13.571525    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:13.705578    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:13.807408    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:14.071843    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:14.206704    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:14.307465    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:14.583629    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:14.705929    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:14.808912    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:15.073093    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:15.207690    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:15.229370    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:15.307619    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:15.572104    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:15.706515    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:15.807416    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:16.071877    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:16.206218    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:16.307800    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:16.573258    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:16.707345    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:16.808948    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:17.073036    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:17.237232    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:17.262188    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:17.324491    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:17.574624    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:17.705842    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:17.807655    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:18.072572    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:18.208084    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:18.307912    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:18.572400    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:18.711148    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:18.807669    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:19.072828    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:19.207116    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:19.312764    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:19.572543    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:19.708488    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:19.727563    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:19.807790    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:20.072560    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:20.206145    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:20.307771    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:20.572152    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:20.705555    8602 kapi.go:107] duration metric: took 1m12.005318341s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0717 00:08:20.807852    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:21.072564    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:21.308490    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:21.571951    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:21.808981    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:22.072308    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:22.228140    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:22.307405    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:22.572740    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:22.807398    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:23.071900    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:23.307729    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:23.572229    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:23.807730    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:24.072336    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:24.307587    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:24.571959    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:24.727098    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:24.807611    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:25.072214    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:25.309712    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:25.572450    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:25.808898    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:26.072961    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:26.308538    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:26.572325    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:26.727981    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:26.809063    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:27.073580    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:27.308816    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:27.574507    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:27.809501    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:28.072323    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:28.308260    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:28.571915    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:28.808558    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:29.073146    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:29.227021    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:29.309100    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:29.571902    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:29.808090    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:30.081192    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:30.309711    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:30.572935    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:30.819128    8602 kapi.go:107] duration metric: took 1m22.51600148s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0717 00:08:31.073851    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:31.227307    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:31.571985    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:32.073639    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:32.573124    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:33.071686    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:33.571536    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:33.726865    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:34.072360    8602 kapi.go:107] duration metric: took 1m22.004067887s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0717 00:08:34.074173    8602 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-579136 cluster.
	I0717 00:08:34.075741    8602 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0717 00:08:34.077408    8602 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0717 00:08:34.079303    8602 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0717 00:08:34.080681    8602 addons.go:510] duration metric: took 1m32.45589318s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0717 00:08:35.727168    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:38.227275    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:40.228413    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:42.726951    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:43.226927    8602 pod_ready.go:92] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"True"
	I0717 00:08:43.226949    8602 pod_ready.go:81] duration metric: took 1m21.006143148s for pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace to be "Ready" ...
	I0717 00:08:43.226961    8602 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-r42hf" in "kube-system" namespace to be "Ready" ...
	I0717 00:08:43.231997    8602 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-r42hf" in "kube-system" namespace has status "Ready":"True"
	I0717 00:08:43.232021    8602 pod_ready.go:81] duration metric: took 5.053134ms for pod "nvidia-device-plugin-daemonset-r42hf" in "kube-system" namespace to be "Ready" ...
	I0717 00:08:43.232042    8602 pod_ready.go:38] duration metric: took 1m23.470563589s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:08:43.232770    8602 api_server.go:52] waiting for apiserver process to appear ...
	I0717 00:08:43.233453    8602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 00:08:43.233524    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 00:08:43.282604    8602 cri.go:89] found id: "84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1"
	I0717 00:08:43.282625    8602 cri.go:89] found id: ""
	I0717 00:08:43.282632    8602 logs.go:276] 1 containers: [84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1]
	I0717 00:08:43.282688    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:43.287152    8602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 00:08:43.287234    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 00:08:43.326744    8602 cri.go:89] found id: "be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39"
	I0717 00:08:43.326800    8602 cri.go:89] found id: ""
	I0717 00:08:43.326809    8602 logs.go:276] 1 containers: [be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39]
	I0717 00:08:43.326867    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:43.330556    8602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 00:08:43.330632    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 00:08:43.368363    8602 cri.go:89] found id: "33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21"
	I0717 00:08:43.368385    8602 cri.go:89] found id: ""
	I0717 00:08:43.368394    8602 logs.go:276] 1 containers: [33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21]
	I0717 00:08:43.368459    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:43.371960    8602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 00:08:43.372028    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 00:08:43.411423    8602 cri.go:89] found id: "eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd"
	I0717 00:08:43.411446    8602 cri.go:89] found id: ""
	I0717 00:08:43.411454    8602 logs.go:276] 1 containers: [eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd]
	I0717 00:08:43.411511    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:43.415141    8602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 00:08:43.415211    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 00:08:43.457442    8602 cri.go:89] found id: "f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a"
	I0717 00:08:43.457460    8602 cri.go:89] found id: ""
	I0717 00:08:43.457469    8602 logs.go:276] 1 containers: [f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a]
	I0717 00:08:43.457522    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:43.460894    8602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 00:08:43.460982    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 00:08:43.505085    8602 cri.go:89] found id: "8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883"
	I0717 00:08:43.505107    8602 cri.go:89] found id: ""
	I0717 00:08:43.505115    8602 logs.go:276] 1 containers: [8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883]
	I0717 00:08:43.505169    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:43.508801    8602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 00:08:43.508869    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 00:08:43.548185    8602 cri.go:89] found id: "8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e"
	I0717 00:08:43.548247    8602 cri.go:89] found id: ""
	I0717 00:08:43.548269    8602 logs.go:276] 1 containers: [8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e]
	I0717 00:08:43.548339    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:43.551874    8602 logs.go:123] Gathering logs for kube-controller-manager [8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883] ...
	I0717 00:08:43.551897    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883"
	I0717 00:08:43.625867    8602 logs.go:123] Gathering logs for container status ...
	I0717 00:08:43.625898    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 00:08:43.705960    8602 logs.go:123] Gathering logs for kube-apiserver [84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1] ...
	I0717 00:08:43.706000    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1"
	I0717 00:08:43.777756    8602 logs.go:123] Gathering logs for etcd [be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39] ...
	I0717 00:08:43.777786    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39"
	I0717 00:08:43.828700    8602 logs.go:123] Gathering logs for coredns [33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21] ...
	I0717 00:08:43.828734    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21"
	I0717 00:08:43.878952    8602 logs.go:123] Gathering logs for kube-proxy [f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a] ...
	I0717 00:08:43.879046    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a"
	I0717 00:08:43.918154    8602 logs.go:123] Gathering logs for kindnet [8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e] ...
	I0717 00:08:43.918181    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e"
	I0717 00:08:43.976888    8602 logs.go:123] Gathering logs for CRI-O ...
	I0717 00:08:43.976925    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 00:08:44.084641    8602 logs.go:123] Gathering logs for kubelet ...
	I0717 00:08:44.084678    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 00:08:44.182562    8602 logs.go:123] Gathering logs for dmesg ...
	I0717 00:08:44.182597    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 00:08:44.196547    8602 logs.go:123] Gathering logs for describe nodes ...
	I0717 00:08:44.196574    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 00:08:44.363438    8602 logs.go:123] Gathering logs for kube-scheduler [eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd] ...
	I0717 00:08:44.363465    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd"
	I0717 00:08:46.918243    8602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:08:46.931654    8602 api_server.go:72] duration metric: took 1m45.30722339s to wait for apiserver process to appear ...
	I0717 00:08:46.931682    8602 api_server.go:88] waiting for apiserver healthz status ...
	I0717 00:08:46.931716    8602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 00:08:46.931776    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 00:08:46.971787    8602 cri.go:89] found id: "84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1"
	I0717 00:08:46.971812    8602 cri.go:89] found id: ""
	I0717 00:08:46.971821    8602 logs.go:276] 1 containers: [84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1]
	I0717 00:08:46.971876    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:46.975560    8602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 00:08:46.975668    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 00:08:47.015622    8602 cri.go:89] found id: "be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39"
	I0717 00:08:47.015642    8602 cri.go:89] found id: ""
	I0717 00:08:47.015650    8602 logs.go:276] 1 containers: [be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39]
	I0717 00:08:47.015705    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:47.019106    8602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 00:08:47.019178    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 00:08:47.057979    8602 cri.go:89] found id: "33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21"
	I0717 00:08:47.058002    8602 cri.go:89] found id: ""
	I0717 00:08:47.058010    8602 logs.go:276] 1 containers: [33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21]
	I0717 00:08:47.058066    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:47.061727    8602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 00:08:47.061843    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 00:08:47.107057    8602 cri.go:89] found id: "eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd"
	I0717 00:08:47.107080    8602 cri.go:89] found id: ""
	I0717 00:08:47.107089    8602 logs.go:276] 1 containers: [eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd]
	I0717 00:08:47.107148    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:47.110574    8602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 00:08:47.110641    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 00:08:47.151779    8602 cri.go:89] found id: "f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a"
	I0717 00:08:47.151810    8602 cri.go:89] found id: ""
	I0717 00:08:47.151819    8602 logs.go:276] 1 containers: [f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a]
	I0717 00:08:47.151872    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:47.155501    8602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 00:08:47.155619    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 00:08:47.196072    8602 cri.go:89] found id: "8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883"
	I0717 00:08:47.196095    8602 cri.go:89] found id: ""
	I0717 00:08:47.196103    8602 logs.go:276] 1 containers: [8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883]
	I0717 00:08:47.196158    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:47.199729    8602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 00:08:47.199799    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 00:08:47.239714    8602 cri.go:89] found id: "8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e"
	I0717 00:08:47.239737    8602 cri.go:89] found id: ""
	I0717 00:08:47.239745    8602 logs.go:276] 1 containers: [8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e]
	I0717 00:08:47.239816    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:47.243303    8602 logs.go:123] Gathering logs for kubelet ...
	I0717 00:08:47.243330    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 00:08:47.335189    8602 logs.go:123] Gathering logs for dmesg ...
	I0717 00:08:47.335226    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 00:08:47.348041    8602 logs.go:123] Gathering logs for kube-scheduler [eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd] ...
	I0717 00:08:47.348076    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd"
	I0717 00:08:47.403600    8602 logs.go:123] Gathering logs for kindnet [8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e] ...
	I0717 00:08:47.403635    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e"
	I0717 00:08:47.449666    8602 logs.go:123] Gathering logs for CRI-O ...
	I0717 00:08:47.449696    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 00:08:47.548623    8602 logs.go:123] Gathering logs for describe nodes ...
	I0717 00:08:47.548657    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 00:08:47.684078    8602 logs.go:123] Gathering logs for kube-apiserver [84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1] ...
	I0717 00:08:47.684106    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1"
	I0717 00:08:47.749877    8602 logs.go:123] Gathering logs for etcd [be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39] ...
	I0717 00:08:47.749909    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39"
	I0717 00:08:47.805556    8602 logs.go:123] Gathering logs for coredns [33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21] ...
	I0717 00:08:47.805587    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21"
	I0717 00:08:47.849455    8602 logs.go:123] Gathering logs for kube-proxy [f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a] ...
	I0717 00:08:47.849483    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a"
	I0717 00:08:47.885552    8602 logs.go:123] Gathering logs for kube-controller-manager [8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883] ...
	I0717 00:08:47.885625    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883"
	I0717 00:08:47.958289    8602 logs.go:123] Gathering logs for container status ...
	I0717 00:08:47.958324    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 00:08:50.512129    8602 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0717 00:08:50.521633    8602 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0717 00:08:50.522658    8602 api_server.go:141] control plane version: v1.30.2
	I0717 00:08:50.522689    8602 api_server.go:131] duration metric: took 3.590999725s to wait for apiserver health ...
	I0717 00:08:50.522698    8602 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 00:08:50.522730    8602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 00:08:50.522820    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 00:08:50.561758    8602 cri.go:89] found id: "84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1"
	I0717 00:08:50.561780    8602 cri.go:89] found id: ""
	I0717 00:08:50.561788    8602 logs.go:276] 1 containers: [84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1]
	I0717 00:08:50.561845    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:50.565306    8602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 00:08:50.565379    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 00:08:50.605124    8602 cri.go:89] found id: "be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39"
	I0717 00:08:50.605194    8602 cri.go:89] found id: ""
	I0717 00:08:50.605220    8602 logs.go:276] 1 containers: [be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39]
	I0717 00:08:50.605301    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:50.608800    8602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 00:08:50.608870    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 00:08:50.646243    8602 cri.go:89] found id: "33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21"
	I0717 00:08:50.646267    8602 cri.go:89] found id: ""
	I0717 00:08:50.646275    8602 logs.go:276] 1 containers: [33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21]
	I0717 00:08:50.646329    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:50.649736    8602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 00:08:50.649803    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 00:08:50.688242    8602 cri.go:89] found id: "eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd"
	I0717 00:08:50.688264    8602 cri.go:89] found id: ""
	I0717 00:08:50.688273    8602 logs.go:276] 1 containers: [eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd]
	I0717 00:08:50.688328    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:50.691874    8602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 00:08:50.691994    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 00:08:50.729695    8602 cri.go:89] found id: "f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a"
	I0717 00:08:50.729717    8602 cri.go:89] found id: ""
	I0717 00:08:50.729724    8602 logs.go:276] 1 containers: [f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a]
	I0717 00:08:50.729789    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:50.733191    8602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 00:08:50.733258    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 00:08:50.782851    8602 cri.go:89] found id: "8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883"
	I0717 00:08:50.782872    8602 cri.go:89] found id: ""
	I0717 00:08:50.782880    8602 logs.go:276] 1 containers: [8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883]
	I0717 00:08:50.782934    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:50.786841    8602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 00:08:50.786911    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 00:08:50.824534    8602 cri.go:89] found id: "8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e"
	I0717 00:08:50.824605    8602 cri.go:89] found id: ""
	I0717 00:08:50.824621    8602 logs.go:276] 1 containers: [8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e]
	I0717 00:08:50.824688    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:50.828069    8602 logs.go:123] Gathering logs for kube-apiserver [84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1] ...
	I0717 00:08:50.828172    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1"
	I0717 00:08:50.898105    8602 logs.go:123] Gathering logs for coredns [33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21] ...
	I0717 00:08:50.898140    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21"
	I0717 00:08:50.941620    8602 logs.go:123] Gathering logs for kube-proxy [f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a] ...
	I0717 00:08:50.941645    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a"
	I0717 00:08:50.979078    8602 logs.go:123] Gathering logs for container status ...
	I0717 00:08:50.979104    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 00:08:51.023447    8602 logs.go:123] Gathering logs for dmesg ...
	I0717 00:08:51.023478    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 00:08:51.035793    8602 logs.go:123] Gathering logs for describe nodes ...
	I0717 00:08:51.035819    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 00:08:51.176202    8602 logs.go:123] Gathering logs for etcd [be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39] ...
	I0717 00:08:51.176233    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39"
	I0717 00:08:51.224401    8602 logs.go:123] Gathering logs for kube-scheduler [eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd] ...
	I0717 00:08:51.224432    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd"
	I0717 00:08:51.270906    8602 logs.go:123] Gathering logs for kube-controller-manager [8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883] ...
	I0717 00:08:51.270941    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883"
	I0717 00:08:51.360049    8602 logs.go:123] Gathering logs for kindnet [8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e] ...
	I0717 00:08:51.360084    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e"
	I0717 00:08:51.405552    8602 logs.go:123] Gathering logs for CRI-O ...
	I0717 00:08:51.405585    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 00:08:51.497152    8602 logs.go:123] Gathering logs for kubelet ...
	I0717 00:08:51.497189    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 00:08:54.103936    8602 system_pods.go:59] 18 kube-system pods found
	I0717 00:08:54.103976    8602 system_pods.go:61] "coredns-7db6d8ff4d-p58r6" [95609ac0-378e-4169-a21c-a18fd2036b08] Running
	I0717 00:08:54.103984    8602 system_pods.go:61] "csi-hostpath-attacher-0" [65e41986-2d2f-471c-8d19-a5620abf95b6] Running
	I0717 00:08:54.103990    8602 system_pods.go:61] "csi-hostpath-resizer-0" [3e88069a-e672-4a74-87bc-e1a71f52778a] Running
	I0717 00:08:54.103995    8602 system_pods.go:61] "csi-hostpathplugin-xkhk8" [d88c0ab7-3d63-42dd-a9b2-c2192513a989] Running
	I0717 00:08:54.104000    8602 system_pods.go:61] "etcd-addons-579136" [78bbc9c5-4bcb-4d26-9573-566c4362c019] Running
	I0717 00:08:54.104006    8602 system_pods.go:61] "kindnet-nv8dn" [5596281d-4baf-4082-b2ad-fe1547266b35] Running
	I0717 00:08:54.104010    8602 system_pods.go:61] "kube-apiserver-addons-579136" [e11fdf68-2d03-4ce4-b284-22039a659cf1] Running
	I0717 00:08:54.104014    8602 system_pods.go:61] "kube-controller-manager-addons-579136" [030d9096-10c3-47c9-aeab-5c8edde94b8d] Running
	I0717 00:08:54.104019    8602 system_pods.go:61] "kube-ingress-dns-minikube" [71c59a88-c4ab-4c05-9b67-191701cdb616] Running
	I0717 00:08:54.104023    8602 system_pods.go:61] "kube-proxy-b7z7h" [40503070-a17c-4e76-9aaf-c1157a0270ad] Running
	I0717 00:08:54.104028    8602 system_pods.go:61] "kube-scheduler-addons-579136" [178f3333-59ee-41b7-8f2b-bb2da614cbfe] Running
	I0717 00:08:54.104035    8602 system_pods.go:61] "metrics-server-c59844bb4-hqndr" [fd5ef1e8-e8e0-48cc-b59f-0b5862eb6c62] Running
	I0717 00:08:54.104040    8602 system_pods.go:61] "nvidia-device-plugin-daemonset-r42hf" [3055b027-1414-4787-9a8e-0a95e312c842] Running
	I0717 00:08:54.104047    8602 system_pods.go:61] "registry-9j5kz" [98b03ce8-0e8f-459e-860d-ffeebb54febc] Running
	I0717 00:08:54.104052    8602 system_pods.go:61] "registry-proxy-qckjd" [7a2c2cd9-4d11-44d7-a720-59b20cb1e5c7] Running
	I0717 00:08:54.104056    8602 system_pods.go:61] "snapshot-controller-745499f584-gvc85" [f2399c6b-60ad-432b-bb19-c442a6da83fc] Running
	I0717 00:08:54.104061    8602 system_pods.go:61] "snapshot-controller-745499f584-j5445" [8bcc60b5-f2c2-4b0d-97d6-0527fed9e203] Running
	I0717 00:08:54.104065    8602 system_pods.go:61] "storage-provisioner" [e06986b3-ed58-46c2-8c17-4b63bd9656e6] Running
	I0717 00:08:54.104072    8602 system_pods.go:74] duration metric: took 3.581367726s to wait for pod list to return data ...
	I0717 00:08:54.104088    8602 default_sa.go:34] waiting for default service account to be created ...
	I0717 00:08:54.106731    8602 default_sa.go:45] found service account: "default"
	I0717 00:08:54.106777    8602 default_sa.go:55] duration metric: took 2.662289ms for default service account to be created ...
	I0717 00:08:54.106790    8602 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 00:08:54.117049    8602 system_pods.go:86] 18 kube-system pods found
	I0717 00:08:54.117082    8602 system_pods.go:89] "coredns-7db6d8ff4d-p58r6" [95609ac0-378e-4169-a21c-a18fd2036b08] Running
	I0717 00:08:54.117090    8602 system_pods.go:89] "csi-hostpath-attacher-0" [65e41986-2d2f-471c-8d19-a5620abf95b6] Running
	I0717 00:08:54.117095    8602 system_pods.go:89] "csi-hostpath-resizer-0" [3e88069a-e672-4a74-87bc-e1a71f52778a] Running
	I0717 00:08:54.117100    8602 system_pods.go:89] "csi-hostpathplugin-xkhk8" [d88c0ab7-3d63-42dd-a9b2-c2192513a989] Running
	I0717 00:08:54.117104    8602 system_pods.go:89] "etcd-addons-579136" [78bbc9c5-4bcb-4d26-9573-566c4362c019] Running
	I0717 00:08:54.117108    8602 system_pods.go:89] "kindnet-nv8dn" [5596281d-4baf-4082-b2ad-fe1547266b35] Running
	I0717 00:08:54.117113    8602 system_pods.go:89] "kube-apiserver-addons-579136" [e11fdf68-2d03-4ce4-b284-22039a659cf1] Running
	I0717 00:08:54.117118    8602 system_pods.go:89] "kube-controller-manager-addons-579136" [030d9096-10c3-47c9-aeab-5c8edde94b8d] Running
	I0717 00:08:54.117121    8602 system_pods.go:89] "kube-ingress-dns-minikube" [71c59a88-c4ab-4c05-9b67-191701cdb616] Running
	I0717 00:08:54.117126    8602 system_pods.go:89] "kube-proxy-b7z7h" [40503070-a17c-4e76-9aaf-c1157a0270ad] Running
	I0717 00:08:54.117130    8602 system_pods.go:89] "kube-scheduler-addons-579136" [178f3333-59ee-41b7-8f2b-bb2da614cbfe] Running
	I0717 00:08:54.117135    8602 system_pods.go:89] "metrics-server-c59844bb4-hqndr" [fd5ef1e8-e8e0-48cc-b59f-0b5862eb6c62] Running
	I0717 00:08:54.117141    8602 system_pods.go:89] "nvidia-device-plugin-daemonset-r42hf" [3055b027-1414-4787-9a8e-0a95e312c842] Running
	I0717 00:08:54.117145    8602 system_pods.go:89] "registry-9j5kz" [98b03ce8-0e8f-459e-860d-ffeebb54febc] Running
	I0717 00:08:54.117159    8602 system_pods.go:89] "registry-proxy-qckjd" [7a2c2cd9-4d11-44d7-a720-59b20cb1e5c7] Running
	I0717 00:08:54.117164    8602 system_pods.go:89] "snapshot-controller-745499f584-gvc85" [f2399c6b-60ad-432b-bb19-c442a6da83fc] Running
	I0717 00:08:54.117168    8602 system_pods.go:89] "snapshot-controller-745499f584-j5445" [8bcc60b5-f2c2-4b0d-97d6-0527fed9e203] Running
	I0717 00:08:54.117175    8602 system_pods.go:89] "storage-provisioner" [e06986b3-ed58-46c2-8c17-4b63bd9656e6] Running
	I0717 00:08:54.117191    8602 system_pods.go:126] duration metric: took 10.395642ms to wait for k8s-apps to be running ...
	I0717 00:08:54.117199    8602 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 00:08:54.117258    8602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:08:54.129931    8602 system_svc.go:56] duration metric: took 12.723135ms WaitForService to wait for kubelet
	I0717 00:08:54.129963    8602 kubeadm.go:582] duration metric: took 1m52.505539059s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:08:54.129984    8602 node_conditions.go:102] verifying NodePressure condition ...
	I0717 00:08:54.133092    8602 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0717 00:08:54.133127    8602 node_conditions.go:123] node cpu capacity is 2
	I0717 00:08:54.133139    8602 node_conditions.go:105] duration metric: took 3.150518ms to run NodePressure ...
	I0717 00:08:54.133153    8602 start.go:241] waiting for startup goroutines ...
	I0717 00:08:54.133160    8602 start.go:246] waiting for cluster config update ...
	I0717 00:08:54.133181    8602 start.go:255] writing updated cluster config ...
	I0717 00:08:54.133483    8602 ssh_runner.go:195] Run: rm -f paused
	I0717 00:08:54.455301    8602 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 00:08:54.459586    8602 out.go:177] * Done! kubectl is now configured to use "addons-579136" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.454412239Z" level=info msg="Removed container 50b8172017038b30dd4d66d98764922312b52c15e1c5738928b86e91045cca32: ingress-nginx/ingress-nginx-admission-create-8r57q/create" id=5d0eee0f-f819-46bc-9e22-2885b15ac305 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.456017229Z" level=info msg="Stopping pod sandbox: 797bbfc019a5790fa2f47a68b832e18956bac1ad0c2cc5a4e9822cb5e6400a20" id=b15d59b0-c6bc-44ea-98c5-50651cec674a name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.456232572Z" level=info msg="Stopped pod sandbox (already stopped): 797bbfc019a5790fa2f47a68b832e18956bac1ad0c2cc5a4e9822cb5e6400a20" id=b15d59b0-c6bc-44ea-98c5-50651cec674a name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.456531217Z" level=info msg="Removing pod sandbox: 797bbfc019a5790fa2f47a68b832e18956bac1ad0c2cc5a4e9822cb5e6400a20" id=f2658efa-64f8-43ae-8c25-3730f6daa8da name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.464770089Z" level=info msg="Removed pod sandbox: 797bbfc019a5790fa2f47a68b832e18956bac1ad0c2cc5a4e9822cb5e6400a20" id=f2658efa-64f8-43ae-8c25-3730f6daa8da name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.465271243Z" level=info msg="Stopping pod sandbox: bbe975babd63c80bd49e04507410e35e5033fae8c183e0821ffcb53c01f01798" id=1d732f0b-305d-4941-abf2-2b7e5e3abb69 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.465310441Z" level=info msg="Stopped pod sandbox (already stopped): bbe975babd63c80bd49e04507410e35e5033fae8c183e0821ffcb53c01f01798" id=1d732f0b-305d-4941-abf2-2b7e5e3abb69 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.465644623Z" level=info msg="Removing pod sandbox: bbe975babd63c80bd49e04507410e35e5033fae8c183e0821ffcb53c01f01798" id=21bb41d3-e0ca-4615-8623-d50b61dce856 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.474353625Z" level=info msg="Removed pod sandbox: bbe975babd63c80bd49e04507410e35e5033fae8c183e0821ffcb53c01f01798" id=21bb41d3-e0ca-4615-8623-d50b61dce856 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.474878469Z" level=info msg="Stopping pod sandbox: b2177e92f7aa014badfbb161839316142dacdbd76b35f980f8705981130bebec" id=80bc0ff2-2c5d-49c7-9cc8-37a1d3da2e97 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.474912497Z" level=info msg="Stopped pod sandbox (already stopped): b2177e92f7aa014badfbb161839316142dacdbd76b35f980f8705981130bebec" id=80bc0ff2-2c5d-49c7-9cc8-37a1d3da2e97 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.475253219Z" level=info msg="Removing pod sandbox: b2177e92f7aa014badfbb161839316142dacdbd76b35f980f8705981130bebec" id=1cb6111e-dc77-4552-9a90-3cbc409fa16a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.483271373Z" level=info msg="Removed pod sandbox: b2177e92f7aa014badfbb161839316142dacdbd76b35f980f8705981130bebec" id=1cb6111e-dc77-4552-9a90-3cbc409fa16a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.531979335Z" level=info msg="Stopped container 7e823ef767fe7576a636726e1f59923ffc8244134e8c40b7910491b750ba2c8a: ingress-nginx/ingress-nginx-controller-768f948f8f-99g7h/controller" id=c4a4bca5-38d5-46ee-be1e-802d0769da7f name=/runtime.v1.RuntimeService/StopContainer
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.532519876Z" level=info msg="Stopping pod sandbox: 7d488a0d17d123f28486a990d183d1eecd0d08eb14eccda2bd773561855eba90" id=e6d08a30-106a-48f2-92d2-4adeb0765747 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.536237985Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-M7IQKSKMTBP5O5UC - [0:0]\n:KUBE-HP-XGOT7NAVQAOHKVVK - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-XGOT7NAVQAOHKVVK\n-X KUBE-HP-M7IQKSKMTBP5O5UC\nCOMMIT\n"
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.550701247Z" level=info msg="Closing host port tcp:80"
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.550755075Z" level=info msg="Closing host port tcp:443"
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.552484844Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.552511807Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.552683087Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-768f948f8f-99g7h Namespace:ingress-nginx ID:7d488a0d17d123f28486a990d183d1eecd0d08eb14eccda2bd773561855eba90 UID:d651a115-e23d-4adf-9987-ebb248d4c190 NetNS:/var/run/netns/0a02538d-6f41-4f09-9921-40e1f2643736 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.552828701Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-768f948f8f-99g7h from CNI network \"kindnet\" (type=ptp)"
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.582642003Z" level=info msg="Stopped pod sandbox: 7d488a0d17d123f28486a990d183d1eecd0d08eb14eccda2bd773561855eba90" id=e6d08a30-106a-48f2-92d2-4adeb0765747 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.686651751Z" level=info msg="Removing container: 7e823ef767fe7576a636726e1f59923ffc8244134e8c40b7910491b750ba2c8a" id=453580dc-d513-4c81-bc7d-e82340fd2435 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.702699391Z" level=info msg="Removed container 7e823ef767fe7576a636726e1f59923ffc8244134e8c40b7910491b750ba2c8a: ingress-nginx/ingress-nginx-controller-768f948f8f-99g7h/controller" id=453580dc-d513-4c81-bc7d-e82340fd2435 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ab1d5f3067a11       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   9 seconds ago       Running             hello-world-app           0                   4c6bd0d427be0       hello-world-app-6778b5fc9f-kxhh8
	a4c547da00f93       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                         2 minutes ago       Running             nginx                     0                   bcf86b42b257d       nginx
	c62bfa850a7fb       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                   3 minutes ago       Running             headlamp                  0                   efb56325b9000       headlamp-7867546754-fq62p
	b571fbe936b7c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69            4 minutes ago       Running             gcp-auth                  0                   d2bde78635b59       gcp-auth-5db96cd9b4-hth2l
	48ac120f6ea13       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                         4 minutes ago       Running             yakd                      0                   b9e2bf010791f       yakd-dashboard-799879c74f-r64g4
	2dc85b71327dd       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70   5 minutes ago       Running             metrics-server            0                   5efc11cb2e3a2       metrics-server-c59844bb4-hqndr
	d05ed15daf27e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        5 minutes ago       Running             storage-provisioner       0                   faae55f65a262       storage-provisioner
	33f540e04476e       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                        5 minutes ago       Running             coredns                   0                   be3266b7f5913       coredns-7db6d8ff4d-p58r6
	8c23c06f4e4c8       docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493                      5 minutes ago       Running             kindnet-cni               0                   81856d966fb7a       kindnet-nv8dn
	f597b2cda48f6       66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae                                                        5 minutes ago       Running             kube-proxy                0                   7d8ae7d2389b3       kube-proxy-b7z7h
	eec7d7d9059cb       c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5                                                        6 minutes ago       Running             kube-scheduler            0                   ffa811be62a9a       kube-scheduler-addons-579136
	8428206521ac5       e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567                                                        6 minutes ago       Running             kube-controller-manager   0                   3f6cd66e774e4       kube-controller-manager-addons-579136
	be2ec7b44daeb       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                        6 minutes ago       Running             etcd                      0                   828c8d42490e0       etcd-addons-579136
	84aa8590a287f       84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0                                                        6 minutes ago       Running             kube-apiserver            0                   e652878d1fc0c       kube-apiserver-addons-579136
	
	
	==> coredns [33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21] <==
	[INFO] 10.244.0.12:42933 - 3590 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002674955s
	[INFO] 10.244.0.12:42665 - 6894 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00011142s
	[INFO] 10.244.0.12:42665 - 40939 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000143077s
	[INFO] 10.244.0.12:40110 - 40228 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000106703s
	[INFO] 10.244.0.12:40110 - 40225 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000157535s
	[INFO] 10.244.0.12:52982 - 38654 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000059219s
	[INFO] 10.244.0.12:52982 - 51699 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000050094s
	[INFO] 10.244.0.12:35738 - 18054 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00004362s
	[INFO] 10.244.0.12:35738 - 30852 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000061499s
	[INFO] 10.244.0.12:43813 - 25020 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001345363s
	[INFO] 10.244.0.12:43813 - 35234 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001404713s
	[INFO] 10.244.0.12:33019 - 13528 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000075259s
	[INFO] 10.244.0.12:33019 - 24542 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000054353s
	[INFO] 10.244.0.20:50705 - 41399 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002272195s
	[INFO] 10.244.0.20:36043 - 64044 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002464242s
	[INFO] 10.244.0.20:45241 - 58361 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000140107s
	[INFO] 10.244.0.20:49194 - 19126 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000065298s
	[INFO] 10.244.0.20:51582 - 45119 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114309s
	[INFO] 10.244.0.20:54636 - 34517 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000111568s
	[INFO] 10.244.0.20:39637 - 1710 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004920222s
	[INFO] 10.244.0.20:54995 - 11120 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.005614685s
	[INFO] 10.244.0.20:49724 - 41375 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001574409s
	[INFO] 10.244.0.20:53984 - 53524 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000822492s
	[INFO] 10.244.0.22:40141 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000188117s
	[INFO] 10.244.0.22:40456 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000099597s
	
	
	==> describe nodes <==
	Name:               addons-579136
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-579136
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=addons-579136
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T00_06_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-579136
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:06:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-579136
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:12:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:10:53 +0000   Wed, 17 Jul 2024 00:06:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:10:53 +0000   Wed, 17 Jul 2024 00:06:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:10:53 +0000   Wed, 17 Jul 2024 00:06:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:10:53 +0000   Wed, 17 Jul 2024 00:07:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-579136
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 95f4effa184244a7a48b8cd1f484601f
	  System UUID:                0dff8d19-ba0c-4a39-bf5a-66328e26eb1a
	  Boot ID:                    a28e50e2-5a2a-4346-aa05-4284fb20291b
	  Kernel Version:             5.15.0-1064-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-kxhh8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  gcp-auth                    gcp-auth-5db96cd9b4-hth2l                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  headlamp                    headlamp-7867546754-fq62p                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 coredns-7db6d8ff4d-p58r6                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m50s
	  kube-system                 etcd-addons-579136                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m5s
	  kube-system                 kindnet-nv8dn                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m51s
	  kube-system                 kube-apiserver-addons-579136             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-controller-manager-addons-579136    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-proxy-b7z7h                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 kube-scheduler-addons-579136             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 metrics-server-c59844bb4-hqndr           100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m47s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
	  yakd-dashboard              yakd-dashboard-799879c74f-r64g4          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             548Mi (6%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m45s  kube-proxy       
	  Normal  Starting                 6m6s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m5s   kubelet          Node addons-579136 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m5s   kubelet          Node addons-579136 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m5s   kubelet          Node addons-579136 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m52s  node-controller  Node addons-579136 event: Registered Node addons-579136 in Controller
	  Normal  NodeReady                5m34s  kubelet          Node addons-579136 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul16 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015067] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.476271] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.061374] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002702] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.017886] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.004642] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.003674] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.659967] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.278169] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39] <==
	{"level":"info","ts":"2024-07-17T00:06:42.943081Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-17T00:06:42.94313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-07-17T00:06:42.943164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-17T00:06:42.946464Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:06:42.951018Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-579136 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T00:06:42.951196Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T00:06:42.952964Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-07-17T00:06:42.955122Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:06:42.955375Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:06:42.955453Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:06:42.955148Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T00:06:42.963855Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T00:06:42.963941Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T00:06:42.99533Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-07-17T00:07:02.423627Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.245989ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128030573269489984 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/controllerrevisions/kube-system/kube-proxy-669fc44fbc\" mod_revision:0 > success:<request_put:<key:\"/registry/controllerrevisions/kube-system/kube-proxy-669fc44fbc\" value_size:2056 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-17T00:07:02.42609Z","caller":"traceutil/trace.go:171","msg":"trace[191558539] transaction","detail":"{read_only:false; response_revision:325; number_of_response:1; }","duration":"134.887529ms","start":"2024-07-17T00:07:02.29117Z","end":"2024-07-17T00:07:02.426057Z","steps":["trace[191558539] 'process raft request'  (duration: 31.721331ms)","trace[191558539] 'compare'  (duration: 100.109821ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:07:02.426708Z","caller":"traceutil/trace.go:171","msg":"trace[933581596] linearizableReadLoop","detail":"{readStateIndex:337; appliedIndex:334; }","duration":"101.300716ms","start":"2024-07-17T00:07:02.325396Z","end":"2024-07-17T00:07:02.426697Z","steps":["trace[933581596] 'read index received'  (duration: 22.641252ms)","trace[933581596] 'applied index is now lower than readState.Index'  (duration: 78.658668ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T00:07:02.427594Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.548239ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-node-lease/\" range_end:\"/registry/serviceaccounts/kube-node-lease0\" ","response":"range_response_count:1 size:187"}
	{"level":"info","ts":"2024-07-17T00:07:02.430647Z","caller":"traceutil/trace.go:171","msg":"trace[172502781] range","detail":"{range_begin:/registry/serviceaccounts/kube-node-lease/; range_end:/registry/serviceaccounts/kube-node-lease0; response_count:1; response_revision:327; }","duration":"105.24439ms","start":"2024-07-17T00:07:02.325392Z","end":"2024-07-17T00:07:02.430636Z","steps":["trace[172502781] 'agreement among raft nodes before linearized reading'  (duration: 101.496168ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:07:02.430455Z","caller":"traceutil/trace.go:171","msg":"trace[1608852876] transaction","detail":"{read_only:false; response_revision:326; number_of_response:1; }","duration":"138.984159ms","start":"2024-07-17T00:07:02.291457Z","end":"2024-07-17T00:07:02.430441Z","steps":["trace[1608852876] 'process raft request'  (duration: 132.473821ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:07:02.430562Z","caller":"traceutil/trace.go:171","msg":"trace[722345133] transaction","detail":"{read_only:false; response_revision:327; number_of_response:1; }","duration":"107.836974ms","start":"2024-07-17T00:07:02.322719Z","end":"2024-07-17T00:07:02.430556Z","steps":["trace[722345133] 'process raft request'  (duration: 101.297819ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:07:02.533235Z","caller":"traceutil/trace.go:171","msg":"trace[1596833142] transaction","detail":"{read_only:false; response_revision:328; number_of_response:1; }","duration":"106.533223ms","start":"2024-07-17T00:07:02.426646Z","end":"2024-07-17T00:07:02.533179Z","steps":["trace[1596833142] 'process raft request'  (duration: 75.375101ms)","trace[1596833142] 'store kv pair into bolt db' {req_type:put; key:/registry/pods/kube-system/coredns-7db6d8ff4d-t4wpd; req_size:3502; } (duration: 14.051443ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:07:03.760442Z","caller":"traceutil/trace.go:171","msg":"trace[1737087953] transaction","detail":"{read_only:false; response_revision:344; number_of_response:1; }","duration":"132.01645ms","start":"2024-07-17T00:07:03.628408Z","end":"2024-07-17T00:07:03.760424Z","steps":["trace[1737087953] 'process raft request'  (duration: 100.641828ms)","trace[1737087953] 'compare'  (duration: 30.951963ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:07:03.792217Z","caller":"traceutil/trace.go:171","msg":"trace[1036976757] transaction","detail":"{read_only:false; response_revision:345; number_of_response:1; }","duration":"163.691068ms","start":"2024-07-17T00:07:03.628511Z","end":"2024-07-17T00:07:03.792202Z","steps":["trace[1036976757] 'process raft request'  (duration: 131.5916ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:07:05.587144Z","caller":"traceutil/trace.go:171","msg":"trace[517915260] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"142.107397ms","start":"2024-07-17T00:07:05.445007Z","end":"2024-07-17T00:07:05.587114Z","steps":["trace[517915260] 'process raft request'  (duration: 20.16383ms)"],"step_count":1}
	
	
	==> gcp-auth [b571fbe936b7c1b1b4b4b7a5bcd9aa9d2260e3bb0f8c27d9cf00054905a69996] <==
	2024/07/17 00:08:32 GCP Auth Webhook started!
	2024/07/17 00:08:55 Ready to marshal response ...
	2024/07/17 00:08:55 Ready to write response ...
	2024/07/17 00:08:55 Ready to marshal response ...
	2024/07/17 00:08:55 Ready to write response ...
	2024/07/17 00:08:55 Ready to marshal response ...
	2024/07/17 00:08:55 Ready to write response ...
	2024/07/17 00:09:06 Ready to marshal response ...
	2024/07/17 00:09:06 Ready to write response ...
	2024/07/17 00:09:13 Ready to marshal response ...
	2024/07/17 00:09:13 Ready to write response ...
	2024/07/17 00:09:13 Ready to marshal response ...
	2024/07/17 00:09:13 Ready to write response ...
	2024/07/17 00:09:23 Ready to marshal response ...
	2024/07/17 00:09:23 Ready to write response ...
	2024/07/17 00:09:35 Ready to marshal response ...
	2024/07/17 00:09:35 Ready to write response ...
	2024/07/17 00:10:06 Ready to marshal response ...
	2024/07/17 00:10:06 Ready to write response ...
	2024/07/17 00:10:23 Ready to marshal response ...
	2024/07/17 00:10:23 Ready to write response ...
	2024/07/17 00:12:42 Ready to marshal response ...
	2024/07/17 00:12:42 Ready to write response ...
	
	
	==> kernel <==
	 00:12:53 up 55 min,  0 users,  load average: 0.30, 0.65, 0.41
	Linux addons-579136 5.15.0-1064-aws #70~20.04.1-Ubuntu SMP Thu Jun 27 14:52:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e] <==
	I0717 00:11:38.817721       1 main.go:303] handling current node
	I0717 00:11:48.817585       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:11:48.817616       1 main.go:303] handling current node
	W0717 00:11:55.017313       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0717 00:11:55.017345       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0717 00:11:58.817058       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:11:58.817096       1 main.go:303] handling current node
	I0717 00:12:08.817062       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:12:08.817104       1 main.go:303] handling current node
	W0717 00:12:11.144340       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:12:11.144496       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0717 00:12:18.817512       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:12:18.817554       1 main.go:303] handling current node
	W0717 00:12:21.101432       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:12:21.101580       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0717 00:12:28.817710       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:12:28.817746       1 main.go:303] handling current node
	I0717 00:12:38.817640       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:12:38.817674       1 main.go:303] handling current node
	W0717 00:12:40.117441       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0717 00:12:40.117478       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0717 00:12:44.630820       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:12:44.630852       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0717 00:12:48.817633       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:12:48.817669       1 main.go:303] handling current node
	
	
	==> kube-apiserver [84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1] <==
	E0717 00:08:43.002287       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.101.247:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.101.247:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.101.247:443: connect: connection refused
	E0717 00:08:43.007748       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.101.247:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.101.247:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.101.247:443: connect: connection refused
	I0717 00:08:43.094329       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 00:08:55.363172       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.233.242"}
	E0717 00:09:24.728314       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0717 00:09:24.744065       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0717 00:09:24.761047       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0717 00:09:39.766954       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0717 00:09:46.780971       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0717 00:10:13.606286       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0717 00:10:14.648273       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0717 00:10:22.986395       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:10:22.986545       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:10:23.021995       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:10:23.022133       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:10:23.046350       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:10:23.046404       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:10:23.072418       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:10:23.072539       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:10:23.577872       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0717 00:10:23.909234       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.216.76"}
	W0717 00:10:24.023449       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0717 00:10:24.072683       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0717 00:10:24.089857       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0717 00:12:43.179745       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.15.92"}
	
	
	==> kube-controller-manager [8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883] <==
	W0717 00:11:34.965884       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:11:34.965921       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:11:38.321716       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:11:38.321753       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:11:48.376974       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:11:48.377013       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:12:12.294716       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:12:12.294778       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:12:12.851042       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:12:12.851078       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:12:14.475593       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:12:14.475630       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:12:34.279684       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:12:34.279725       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 00:12:42.970575       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="44.097075ms"
	I0717 00:12:42.983664       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="12.98973ms"
	I0717 00:12:42.983818       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="39.296µs"
	I0717 00:12:43.003247       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="47.821µs"
	I0717 00:12:44.722233       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="8.247824ms"
	I0717 00:12:44.722476       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="35.308µs"
	I0717 00:12:45.364338       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0717 00:12:45.367235       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="5.013µs"
	I0717 00:12:45.373493       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	W0717 00:12:49.146701       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:12:49.146861       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a] <==
	I0717 00:07:06.995959       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:07:07.175033       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0717 00:07:07.795969       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0717 00:07:07.796015       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:07:08.039048       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0717 00:07:08.048365       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0717 00:07:08.048494       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:07:08.048789       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:07:08.113845       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:07:08.136020       1 config.go:192] "Starting service config controller"
	I0717 00:07:08.136124       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:07:08.142893       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:07:08.142911       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:07:08.143414       1 config.go:319] "Starting node config controller"
	I0717 00:07:08.143432       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:07:08.250914       1 shared_informer.go:320] Caches are synced for node config
	I0717 00:07:08.251518       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:07:08.251581       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd] <==
	W0717 00:06:45.708771       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:06:45.709044       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 00:06:45.709086       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:06:45.709122       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:06:45.708810       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 00:06:45.709164       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 00:06:45.708918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 00:06:45.709182       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 00:06:45.708930       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:06:45.709198       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:06:45.708940       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:06:45.709236       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:06:45.709025       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 00:06:45.709260       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 00:06:45.711069       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:06:45.711148       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 00:06:46.666813       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 00:06:46.666852       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 00:06:46.725572       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 00:06:46.725607       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 00:06:46.740520       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:06:46.740560       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:06:46.850187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:06:46.850296       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0717 00:06:47.378323       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 00:12:43 addons-579136 kubelet[1563]: I0717 00:12:43.088445    1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvcqt\" (UniqueName: \"kubernetes.io/projected/443de100-3a3d-4d9b-b8aa-7b1e9154c91f-kube-api-access-wvcqt\") pod \"hello-world-app-6778b5fc9f-kxhh8\" (UID: \"443de100-3a3d-4d9b-b8aa-7b1e9154c91f\") " pod="default/hello-world-app-6778b5fc9f-kxhh8"
	Jul 17 00:12:44 addons-579136 kubelet[1563]: I0717 00:12:44.301540    1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9c6w\" (UniqueName: \"kubernetes.io/projected/71c59a88-c4ab-4c05-9b67-191701cdb616-kube-api-access-w9c6w\") pod \"71c59a88-c4ab-4c05-9b67-191701cdb616\" (UID: \"71c59a88-c4ab-4c05-9b67-191701cdb616\") "
	Jul 17 00:12:44 addons-579136 kubelet[1563]: I0717 00:12:44.314541    1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c59a88-c4ab-4c05-9b67-191701cdb616-kube-api-access-w9c6w" (OuterVolumeSpecName: "kube-api-access-w9c6w") pod "71c59a88-c4ab-4c05-9b67-191701cdb616" (UID: "71c59a88-c4ab-4c05-9b67-191701cdb616"). InnerVolumeSpecName "kube-api-access-w9c6w". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 00:12:44 addons-579136 kubelet[1563]: I0717 00:12:44.402368    1563 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-w9c6w\" (UniqueName: \"kubernetes.io/projected/71c59a88-c4ab-4c05-9b67-191701cdb616-kube-api-access-w9c6w\") on node \"addons-579136\" DevicePath \"\""
	Jul 17 00:12:44 addons-579136 kubelet[1563]: I0717 00:12:44.673393    1563 scope.go:117] "RemoveContainer" containerID="564f21d8ae856d418c1b6c6995373607e37166985a71c2caa84e0564b1b8c411"
	Jul 17 00:12:44 addons-579136 kubelet[1563]: I0717 00:12:44.695699    1563 scope.go:117] "RemoveContainer" containerID="564f21d8ae856d418c1b6c6995373607e37166985a71c2caa84e0564b1b8c411"
	Jul 17 00:12:44 addons-579136 kubelet[1563]: E0717 00:12:44.696071    1563 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"564f21d8ae856d418c1b6c6995373607e37166985a71c2caa84e0564b1b8c411\": container with ID starting with 564f21d8ae856d418c1b6c6995373607e37166985a71c2caa84e0564b1b8c411 not found: ID does not exist" containerID="564f21d8ae856d418c1b6c6995373607e37166985a71c2caa84e0564b1b8c411"
	Jul 17 00:12:44 addons-579136 kubelet[1563]: I0717 00:12:44.696108    1563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"564f21d8ae856d418c1b6c6995373607e37166985a71c2caa84e0564b1b8c411"} err="failed to get container status \"564f21d8ae856d418c1b6c6995373607e37166985a71c2caa84e0564b1b8c411\": rpc error: code = NotFound desc = could not find container \"564f21d8ae856d418c1b6c6995373607e37166985a71c2caa84e0564b1b8c411\": container with ID starting with 564f21d8ae856d418c1b6c6995373607e37166985a71c2caa84e0564b1b8c411 not found: ID does not exist"
	Jul 17 00:12:45 addons-579136 kubelet[1563]: I0717 00:12:45.377701    1563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-kxhh8" podStartSLOduration=2.313477269 podStartE2EDuration="3.377681184s" podCreationTimestamp="2024-07-17 00:12:42 +0000 UTC" firstStartedPulling="2024-07-17 00:12:43.351098635 +0000 UTC m=+355.491725662" lastFinishedPulling="2024-07-17 00:12:44.415302541 +0000 UTC m=+356.555929577" observedRunningTime="2024-07-17 00:12:44.713326181 +0000 UTC m=+356.853953217" watchObservedRunningTime="2024-07-17 00:12:45.377681184 +0000 UTC m=+357.518308286"
	Jul 17 00:12:45 addons-579136 kubelet[1563]: I0717 00:12:45.968567    1563 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f6e3e74-3ff5-4590-8185-7a80720b7edb" path="/var/lib/kubelet/pods/0f6e3e74-3ff5-4590-8185-7a80720b7edb/volumes"
	Jul 17 00:12:45 addons-579136 kubelet[1563]: I0717 00:12:45.969592    1563 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50903e80-fe30-403b-b7f7-42b2770f1b4b" path="/var/lib/kubelet/pods/50903e80-fe30-403b-b7f7-42b2770f1b4b/volumes"
	Jul 17 00:12:45 addons-579136 kubelet[1563]: I0717 00:12:45.969971    1563 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c59a88-c4ab-4c05-9b67-191701cdb616" path="/var/lib/kubelet/pods/71c59a88-c4ab-4c05-9b67-191701cdb616/volumes"
	Jul 17 00:12:48 addons-579136 kubelet[1563]: I0717 00:12:48.417369    1563 scope.go:117] "RemoveContainer" containerID="8acd64cce55666d050db2eb97fc00d47366514ad6329ab239f7e82593fd76969"
	Jul 17 00:12:48 addons-579136 kubelet[1563]: I0717 00:12:48.436017    1563 scope.go:117] "RemoveContainer" containerID="50b8172017038b30dd4d66d98764922312b52c15e1c5738928b86e91045cca32"
	Jul 17 00:12:48 addons-579136 kubelet[1563]: I0717 00:12:48.634607    1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d45pq\" (UniqueName: \"kubernetes.io/projected/d651a115-e23d-4adf-9987-ebb248d4c190-kube-api-access-d45pq\") pod \"d651a115-e23d-4adf-9987-ebb248d4c190\" (UID: \"d651a115-e23d-4adf-9987-ebb248d4c190\") "
	Jul 17 00:12:48 addons-579136 kubelet[1563]: I0717 00:12:48.634660    1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d651a115-e23d-4adf-9987-ebb248d4c190-webhook-cert\") pod \"d651a115-e23d-4adf-9987-ebb248d4c190\" (UID: \"d651a115-e23d-4adf-9987-ebb248d4c190\") "
	Jul 17 00:12:48 addons-579136 kubelet[1563]: I0717 00:12:48.637035    1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d651a115-e23d-4adf-9987-ebb248d4c190-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d651a115-e23d-4adf-9987-ebb248d4c190" (UID: "d651a115-e23d-4adf-9987-ebb248d4c190"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 00:12:48 addons-579136 kubelet[1563]: I0717 00:12:48.638129    1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d651a115-e23d-4adf-9987-ebb248d4c190-kube-api-access-d45pq" (OuterVolumeSpecName: "kube-api-access-d45pq") pod "d651a115-e23d-4adf-9987-ebb248d4c190" (UID: "d651a115-e23d-4adf-9987-ebb248d4c190"). InnerVolumeSpecName "kube-api-access-d45pq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 00:12:48 addons-579136 kubelet[1563]: I0717 00:12:48.684970    1563 scope.go:117] "RemoveContainer" containerID="7e823ef767fe7576a636726e1f59923ffc8244134e8c40b7910491b750ba2c8a"
	Jul 17 00:12:48 addons-579136 kubelet[1563]: I0717 00:12:48.703056    1563 scope.go:117] "RemoveContainer" containerID="7e823ef767fe7576a636726e1f59923ffc8244134e8c40b7910491b750ba2c8a"
	Jul 17 00:12:48 addons-579136 kubelet[1563]: E0717 00:12:48.703409    1563 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e823ef767fe7576a636726e1f59923ffc8244134e8c40b7910491b750ba2c8a\": container with ID starting with 7e823ef767fe7576a636726e1f59923ffc8244134e8c40b7910491b750ba2c8a not found: ID does not exist" containerID="7e823ef767fe7576a636726e1f59923ffc8244134e8c40b7910491b750ba2c8a"
	Jul 17 00:12:48 addons-579136 kubelet[1563]: I0717 00:12:48.703446    1563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e823ef767fe7576a636726e1f59923ffc8244134e8c40b7910491b750ba2c8a"} err="failed to get container status \"7e823ef767fe7576a636726e1f59923ffc8244134e8c40b7910491b750ba2c8a\": rpc error: code = NotFound desc = could not find container \"7e823ef767fe7576a636726e1f59923ffc8244134e8c40b7910491b750ba2c8a\": container with ID starting with 7e823ef767fe7576a636726e1f59923ffc8244134e8c40b7910491b750ba2c8a not found: ID does not exist"
	Jul 17 00:12:48 addons-579136 kubelet[1563]: I0717 00:12:48.734956    1563 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d651a115-e23d-4adf-9987-ebb248d4c190-webhook-cert\") on node \"addons-579136\" DevicePath \"\""
	Jul 17 00:12:48 addons-579136 kubelet[1563]: I0717 00:12:48.734996    1563 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-d45pq\" (UniqueName: \"kubernetes.io/projected/d651a115-e23d-4adf-9987-ebb248d4c190-kube-api-access-d45pq\") on node \"addons-579136\" DevicePath \"\""
	Jul 17 00:12:49 addons-579136 kubelet[1563]: I0717 00:12:49.969266    1563 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d651a115-e23d-4adf-9987-ebb248d4c190" path="/var/lib/kubelet/pods/d651a115-e23d-4adf-9987-ebb248d4c190/volumes"
	
	
	==> storage-provisioner [d05ed15daf27e6eb7314cfda07fd569bdde4ce60019af246e67c6daaa3837031] <==
	I0717 00:07:20.535428       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 00:07:20.551620       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 00:07:20.551746       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 00:07:20.562941       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 00:07:20.563170       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-579136_db04466c-585f-43a5-b0b8-e24129537628!
	I0717 00:07:20.566292       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"af356d7e-57de-482b-87c5-ff66524bddce", APIVersion:"v1", ResourceVersion:"886", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-579136_db04466c-585f-43a5-b0b8-e24129537628 became leader
	I0717 00:07:20.664545       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-579136_db04466c-585f-43a5-b0b8-e24129537628!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-579136 -n addons-579136
helpers_test.go:261: (dbg) Run:  kubectl --context addons-579136 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (151.68s)

TestAddons/parallel/MetricsServer (360.48s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 7.230433ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-hqndr" [fd5ef1e8-e8e0-48cc-b59f-0b5862eb6c62] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004528777s
addons_test.go:417: (dbg) Run:  kubectl --context addons-579136 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-579136 top pods -n kube-system: exit status 1 (113.762981ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-p58r6, age: 3m21.041313848s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-579136 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-579136 top pods -n kube-system: exit status 1 (122.554771ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-p58r6, age: 3m24.092092972s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-579136 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-579136 top pods -n kube-system: exit status 1 (97.687475ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-p58r6, age: 3m27.566521504s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-579136 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-579136 top pods -n kube-system: exit status 1 (92.328149ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-p58r6, age: 3m34.224733432s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-579136 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-579136 top pods -n kube-system: exit status 1 (86.370497ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-p58r6, age: 3m40.386030909s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-579136 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-579136 top pods -n kube-system: exit status 1 (90.515162ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-p58r6, age: 3m54.293549506s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-579136 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-579136 top pods -n kube-system: exit status 1 (85.458347ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-p58r6, age: 4m5.85738021s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-579136 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-579136 top pods -n kube-system: exit status 1 (86.806886ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-p58r6, age: 4m32.27988751s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-579136 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-579136 top pods -n kube-system: exit status 1 (96.720566ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-p58r6, age: 5m5.101190743s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-579136 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-579136 top pods -n kube-system: exit status 1 (88.387396ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-p58r6, age: 6m21.600833553s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-579136 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-579136 top pods -n kube-system: exit status 1 (85.053392ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-p58r6, age: 7m43.742166407s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-579136 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-579136 top pods -n kube-system: exit status 1 (84.12673ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-p58r6, age: 8m14.750589185s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-579136 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-579136 top pods -n kube-system: exit status 1 (81.174877ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-p58r6, age: 9m13.267988618s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-579136 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-579136
helpers_test.go:235: (dbg) docker inspect addons-579136:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f1485c08a8695db0847071d96b873c7b61e1965eedb95fe1665b7c2d3eb027bb",
	        "Created": "2024-07-17T00:06:23.189615279Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 9108,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-17T00:06:23.373233785Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5d45b1ffc93797449a214b942992f529b6d45b715f4913615d5e219890c79f90",
	        "ResolvConfPath": "/var/lib/docker/containers/f1485c08a8695db0847071d96b873c7b61e1965eedb95fe1665b7c2d3eb027bb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f1485c08a8695db0847071d96b873c7b61e1965eedb95fe1665b7c2d3eb027bb/hostname",
	        "HostsPath": "/var/lib/docker/containers/f1485c08a8695db0847071d96b873c7b61e1965eedb95fe1665b7c2d3eb027bb/hosts",
	        "LogPath": "/var/lib/docker/containers/f1485c08a8695db0847071d96b873c7b61e1965eedb95fe1665b7c2d3eb027bb/f1485c08a8695db0847071d96b873c7b61e1965eedb95fe1665b7c2d3eb027bb-json.log",
	        "Name": "/addons-579136",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-579136:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-579136",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0a17a3986b00272011e0c74417d8b3b617230b0669e07894e467b83be32ef441-init/diff:/var/lib/docker/overlay2/5c52293bfcd82276da29849f51cb4ed3256d8b703926adbd5037ecfe280e85a6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0a17a3986b00272011e0c74417d8b3b617230b0669e07894e467b83be32ef441/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0a17a3986b00272011e0c74417d8b3b617230b0669e07894e467b83be32ef441/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0a17a3986b00272011e0c74417d8b3b617230b0669e07894e467b83be32ef441/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-579136",
	                "Source": "/var/lib/docker/volumes/addons-579136/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-579136",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-579136",
	                "name.minikube.sigs.k8s.io": "addons-579136",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eb2eaf5e9e030d11de0eae75a2dc66af4c77a867b0feba85b943efc0aaa088b4",
	            "SandboxKey": "/var/run/docker/netns/eb2eaf5e9e03",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-579136": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c31bae7560a521b93c9ce69126042ea843fa0cebab7232bb7f7d9703da7242e2",
	                    "EndpointID": "0113948c6d5f2dfed84340b0671028f59fb49f2adec5be142955ca0ec216f61a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-579136",
	                        "f1485c08a869"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-579136 -n addons-579136
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-579136 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-579136 logs -n 25: (1.483783989s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-439568                                                                     | download-only-439568   | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC | 17 Jul 24 00:05 UTC |
	| delete  | -p download-only-915936                                                                     | download-only-915936   | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC | 17 Jul 24 00:05 UTC |
	| delete  | -p download-only-740132                                                                     | download-only-740132   | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC | 17 Jul 24 00:05 UTC |
	| start   | --download-only -p                                                                          | download-docker-852192 | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC |                     |
	|         | download-docker-852192                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-852192                                                                   | download-docker-852192 | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC | 17 Jul 24 00:05 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-257090   | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC |                     |
	|         | binary-mirror-257090                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:46489                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-257090                                                                     | binary-mirror-257090   | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC | 17 Jul 24 00:05 UTC |
	| addons  | enable dashboard -p                                                                         | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC |                     |
	|         | addons-579136                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC |                     |
	|         | addons-579136                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-579136 --wait=true                                                                | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC | 17 Jul 24 00:08 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:08 UTC |
	|         | -p addons-579136                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-579136 ip                                                                            | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:09 UTC | 17 Jul 24 00:09 UTC |
	| addons  | addons-579136 addons disable                                                                | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:09 UTC | 17 Jul 24 00:09 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:09 UTC | 17 Jul 24 00:09 UTC |
	|         | -p addons-579136                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:09 UTC | 17 Jul 24 00:09 UTC |
	|         | addons-579136                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-579136 ssh cat                                                                       | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:09 UTC | 17 Jul 24 00:09 UTC |
	|         | /opt/local-path-provisioner/pvc-79ea8134-8c2c-4ed1-b98e-1d5f361ebf2b_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-579136 addons disable                                                                | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:09 UTC | 17 Jul 24 00:10 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:10 UTC | 17 Jul 24 00:10 UTC |
	|         | addons-579136                                                                               |                        |         |         |                     |                     |
	| addons  | addons-579136 addons                                                                        | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:10 UTC | 17 Jul 24 00:10 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-579136 addons                                                                        | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:10 UTC | 17 Jul 24 00:10 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-579136 ssh curl -s                                                                   | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:10 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-579136 ip                                                                            | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:12 UTC | 17 Jul 24 00:12 UTC |
	| addons  | addons-579136 addons disable                                                                | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:12 UTC | 17 Jul 24 00:12 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-579136 addons disable                                                                | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:12 UTC | 17 Jul 24 00:12 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-579136 addons                                                                        | addons-579136          | jenkins | v1.33.1 | 17 Jul 24 00:16 UTC | 17 Jul 24 00:16 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:05:58
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:05:58.798832    8602 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:05:58.798944    8602 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:05:58.798954    8602 out.go:304] Setting ErrFile to fd 2...
	I0717 00:05:58.798959    8602 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:05:58.799289    8602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-2269/.minikube/bin
	I0717 00:05:58.800069    8602 out.go:298] Setting JSON to false
	I0717 00:05:58.800875    8602 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2911,"bootTime":1721171848,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0717 00:05:58.800974    8602 start.go:139] virtualization:  
	I0717 00:05:58.804728    8602 out.go:177] * [addons-579136] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0717 00:05:58.806836    8602 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 00:05:58.806902    8602 notify.go:220] Checking for updates...
	I0717 00:05:58.810855    8602 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:05:58.812546    8602 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-2269/kubeconfig
	I0717 00:05:58.814432    8602 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-2269/.minikube
	I0717 00:05:58.816316    8602 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 00:05:58.818209    8602 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:05:58.820515    8602 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:05:58.846437    8602 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0717 00:05:58.846541    8602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:05:58.909059    8602 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-17 00:05:58.900255911 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 00:05:58.909170    8602 docker.go:307] overlay module found
	I0717 00:05:58.911333    8602 out.go:177] * Using the docker driver based on user configuration
	I0717 00:05:58.913158    8602 start.go:297] selected driver: docker
	I0717 00:05:58.913194    8602 start.go:901] validating driver "docker" against <nil>
	I0717 00:05:58.913212    8602 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:05:58.913819    8602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:05:58.973387    8602 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-17 00:05:58.964065604 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
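The two `docker info` dumps above come from minikube running `docker system info --format "{{json .}}"` and decoding the JSON to check host capabilities (CPU count, memory, cgroup driver). A minimal Python sketch of that probe — the subprocess call requires a local Docker daemon, so this version falls back to a sample trimmed from the log when none is available:

```python
import json
import subprocess

def docker_info():
    """Run `docker system info` with JSON output and decode it.

    Mirrors the cli_runner call in the log above; falls back to a
    canned sample (values copied from this run) if Docker is absent.
    """
    try:
        out = subprocess.check_output(
            ["docker", "system", "info", "--format", "{{json .}}"])
        return json.loads(out)
    except (OSError, subprocess.CalledProcessError):
        return {"NCPU": 2, "MemTotal": 8214900736,
                "OperatingSystem": "Ubuntu 20.04.6 LTS",
                "CgroupDriver": "cgroupfs"}

info = docker_info()
print(info["NCPU"], "CPUs,", info["MemTotal"] // (1024 ** 2), "MiB")
```

The 4000 MB / 2 CPU sizing chosen later in the log is validated against exactly these fields.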
	I0717 00:05:58.973556    8602 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:05:58.973811    8602 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:05:58.975930    8602 out.go:177] * Using Docker driver with root privileges
	I0717 00:05:58.977705    8602 cni.go:84] Creating CNI manager for ""
	I0717 00:05:58.977724    8602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 00:05:58.977735    8602 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 00:05:58.977826    8602 start.go:340] cluster config:
	{Name:addons-579136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-579136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:05:58.979749    8602 out.go:177] * Starting "addons-579136" primary control-plane node in "addons-579136" cluster
	I0717 00:05:58.981988    8602 cache.go:121] Beginning downloading kic base image for docker with crio
	I0717 00:05:58.983817    8602 out.go:177] * Pulling base image v0.0.44-1721064868-19249 ...
	I0717 00:05:58.985675    8602 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:05:58.985728    8602 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19265-2269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4
	I0717 00:05:58.985752    8602 cache.go:56] Caching tarball of preloaded images
	I0717 00:05:58.985761    8602 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local docker daemon
	I0717 00:05:58.985830    8602 preload.go:172] Found /home/jenkins/minikube-integration/19265-2269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0717 00:05:58.985840    8602 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:05:58.986180    8602 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/config.json ...
	I0717 00:05:58.986207    8602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/config.json: {Name:mk77004d90b416030051575e931c0c894a38ccf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:59.003973    8602 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c to local cache
	I0717 00:05:59.004108    8602 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local cache directory
	I0717 00:05:59.004136    8602 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local cache directory, skipping pull
	I0717 00:05:59.004150    8602 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c exists in cache, skipping pull
	I0717 00:05:59.004158    8602 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c as a tarball
	I0717 00:05:59.004164    8602 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c from local cache
	I0717 00:06:15.839576    8602 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c from cached tarball
	I0717 00:06:15.839614    8602 cache.go:194] Successfully downloaded all kic artifacts
	I0717 00:06:15.839651    8602 start.go:360] acquireMachinesLock for addons-579136: {Name:mkdf103692deb65e932cffd7ff6c86e49eeb0190 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:06:15.839761    8602 start.go:364] duration metric: took 86.78µs to acquireMachinesLock for "addons-579136"
	I0717 00:06:15.839804    8602 start.go:93] Provisioning new machine with config: &{Name:addons-579136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-579136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:06:15.839891    8602 start.go:125] createHost starting for "" (driver="docker")
	I0717 00:06:15.842438    8602 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0717 00:06:15.842714    8602 start.go:159] libmachine.API.Create for "addons-579136" (driver="docker")
	I0717 00:06:15.842748    8602 client.go:168] LocalClient.Create starting
	I0717 00:06:15.842881    8602 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19265-2269/.minikube/certs/ca.pem
	I0717 00:06:16.275846    8602 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19265-2269/.minikube/certs/cert.pem
	I0717 00:06:16.736873    8602 cli_runner.go:164] Run: docker network inspect addons-579136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 00:06:16.750297    8602 cli_runner.go:211] docker network inspect addons-579136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 00:06:16.750387    8602 network_create.go:284] running [docker network inspect addons-579136] to gather additional debugging logs...
	I0717 00:06:16.750408    8602 cli_runner.go:164] Run: docker network inspect addons-579136
	W0717 00:06:16.764232    8602 cli_runner.go:211] docker network inspect addons-579136 returned with exit code 1
	I0717 00:06:16.764258    8602 network_create.go:287] error running [docker network inspect addons-579136]: docker network inspect addons-579136: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-579136 not found
	I0717 00:06:16.764270    8602 network_create.go:289] output of [docker network inspect addons-579136]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-579136 not found
	
	** /stderr **
	I0717 00:06:16.764368    8602 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 00:06:16.780120    8602 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c77ac0}
	I0717 00:06:16.780166    8602 network_create.go:124] attempt to create docker network addons-579136 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0717 00:06:16.780222    8602 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-579136 addons-579136
	I0717 00:06:16.852730    8602 network_create.go:108] docker network addons-579136 192.168.49.0/24 created
	I0717 00:06:16.852778    8602 kic.go:121] calculated static IP "192.168.49.2" for the "addons-579136" container
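The subnet fields logged by `network.go` (gateway, client range, broadcast) and the static IP that `kic.go` derives for the container all follow mechanically from the chosen 192.168.49.0/24 block: the gateway is the first host address and the container gets the first client address. A small sketch of that arithmetic using Python's `ipaddress` module (field names copied from the log; not minikube's actual code, which is Go):

```python
import ipaddress

def subnet_info(cidr):
    """Derive the values minikube logs for a candidate docker subnet:
    gateway = first host, clients = remaining hosts, plus broadcast.
    The kic static IP is the first client address."""
    net = ipaddress.ip_network(cidr)
    hosts = list(net.hosts())          # .1 through .254 for a /24
    return {
        "Gateway": str(hosts[0]),      # 192.168.49.1
        "ClientMin": str(hosts[1]),    # 192.168.49.2
        "ClientMax": str(hosts[-1]),   # 192.168.49.254
        "Broadcast": str(net.broadcast_address),
        "StaticIP": str(hosts[1]),     # handed to `docker run --ip`
    }

print(subnet_info("192.168.49.0/24"))
```

This matches the `--subnet=192.168.49.0/24 --gateway=192.168.49.1` flags passed to `docker network create` above and the `--ip 192.168.49.2` used when the container is started later.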
	I0717 00:06:16.852853    8602 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 00:06:16.868920    8602 cli_runner.go:164] Run: docker volume create addons-579136 --label name.minikube.sigs.k8s.io=addons-579136 --label created_by.minikube.sigs.k8s.io=true
	I0717 00:06:16.886934    8602 oci.go:103] Successfully created a docker volume addons-579136
	I0717 00:06:16.887034    8602 cli_runner.go:164] Run: docker run --rm --name addons-579136-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-579136 --entrypoint /usr/bin/test -v addons-579136:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c -d /var/lib
	I0717 00:06:18.958653    8602 cli_runner.go:217] Completed: docker run --rm --name addons-579136-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-579136 --entrypoint /usr/bin/test -v addons-579136:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c -d /var/lib: (2.071562665s)
	I0717 00:06:18.958684    8602 oci.go:107] Successfully prepared a docker volume addons-579136
	I0717 00:06:18.958713    8602 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:06:18.958733    8602 kic.go:194] Starting extracting preloaded images to volume ...
	I0717 00:06:18.958874    8602 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19265-2269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-579136:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 00:06:23.123140    8602 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19265-2269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-579136:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c -I lz4 -xf /preloaded.tar -C /extractDir: (4.16421926s)
	I0717 00:06:23.123175    8602 kic.go:203] duration metric: took 4.164437589s to extract preloaded images to volume ...
	W0717 00:06:23.123316    8602 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 00:06:23.123429    8602 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 00:06:23.174777    8602 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-579136 --name addons-579136 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-579136 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-579136 --network addons-579136 --ip 192.168.49.2 --volume addons-579136:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c
	I0717 00:06:23.535643    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Running}}
	I0717 00:06:23.556735    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:06:23.579536    8602 cli_runner.go:164] Run: docker exec addons-579136 stat /var/lib/dpkg/alternatives/iptables
	I0717 00:06:23.656394    8602 oci.go:144] the created container "addons-579136" has a running status.
	I0717 00:06:23.656427    8602 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa...
	I0717 00:06:24.287958    8602 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 00:06:24.318159    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:06:24.345553    8602 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 00:06:24.345578    8602 kic_runner.go:114] Args: [docker exec --privileged addons-579136 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 00:06:24.412616    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:06:24.435254    8602 machine.go:94] provisionDockerMachine start ...
	I0717 00:06:24.435341    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:06:24.455456    8602 main.go:141] libmachine: Using SSH client type: native
	I0717 00:06:24.455850    8602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0717 00:06:24.455863    8602 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 00:06:24.590617    8602 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-579136
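The repeated `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` calls above extract the host port that Docker bound to the container's SSH port (32768 here, since all ports are published on 127.0.0.1 with ephemeral host ports). The same lookup expressed in Python against the inspect JSON shape — the sample document below is assumed, with values taken from this run:

```python
import json

# Minimal sample of the NetworkSettings.Ports structure returned by
# `docker container inspect` (values copied from this log; 8443 port
# number is illustrative).
inspect_json = json.loads("""
{"NetworkSettings": {"Ports": {
    "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "32768"}],
    "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32769"}]
}}}
""")

def host_port(inspect, container_port):
    """Python equivalent of the Go template
    {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}:
    take the first binding for the container port."""
    return int(inspect["NetworkSettings"]["Ports"][container_port][0]["HostPort"])

print(host_port(inspect_json, "22/tcp"))  # the port the SSH client dials
```

That resolved port is what shows up as `127.0.0.1 32768` in the libmachine SSH client struct on the surrounding lines.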
	
	I0717 00:06:24.590642    8602 ubuntu.go:169] provisioning hostname "addons-579136"
	I0717 00:06:24.590707    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:06:24.607310    8602 main.go:141] libmachine: Using SSH client type: native
	I0717 00:06:24.607598    8602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0717 00:06:24.607613    8602 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-579136 && echo "addons-579136" | sudo tee /etc/hostname
	I0717 00:06:24.750066    8602 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-579136
	
	I0717 00:06:24.750141    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:06:24.767268    8602 main.go:141] libmachine: Using SSH client type: native
	I0717 00:06:24.767509    8602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0717 00:06:24.767525    8602 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-579136' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-579136/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-579136' | sudo tee -a /etc/hosts; 
				fi
			fi
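The shell snippet above ensures the new hostname resolves locally: if no `/etc/hosts` line already ends in `addons-579136`, it rewrites an existing `127.0.1.1` entry in place, otherwise appends one. The same logic as a pure-string Python sketch (a re-implementation for illustration, not minikube code):

```python
import re

def ensure_hostname(hosts: str, name: str) -> str:
    """Mirror the grep/sed pipeline: leave the file alone if the
    hostname is already mapped, else rewrite or append a 127.0.1.1
    entry for it."""
    if re.search(r"^.*\s" + re.escape(name) + r"$", hosts, re.M):
        return hosts  # hostname already present on some line
    if re.search(r"^127\.0\.1\.1\s.*$", hosts, re.M):
        # replace the existing 127.0.1.1 mapping
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {name}",
                      hosts, flags=re.M)
    return hosts + f"127.0.1.1 {name}\n"  # append a fresh entry

print(ensure_hostname("127.0.0.1 localhost\n", "addons-579136"))
```

The empty SSH command output on the next log line is the success case: the appended `tee -a` branch prints nothing back through the captured stream once the hostname is in place.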
	I0717 00:06:24.894841    8602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:06:24.894868    8602 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19265-2269/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-2269/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-2269/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-2269/.minikube}
	I0717 00:06:24.894892    8602 ubuntu.go:177] setting up certificates
	I0717 00:06:24.894902    8602 provision.go:84] configureAuth start
	I0717 00:06:24.894962    8602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-579136
	I0717 00:06:24.911118    8602 provision.go:143] copyHostCerts
	I0717 00:06:24.911209    8602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-2269/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-2269/.minikube/ca.pem (1078 bytes)
	I0717 00:06:24.911332    8602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-2269/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-2269/.minikube/cert.pem (1123 bytes)
	I0717 00:06:24.911391    8602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-2269/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-2269/.minikube/key.pem (1679 bytes)
	I0717 00:06:24.911469    8602 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-2269/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-2269/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-2269/.minikube/certs/ca-key.pem org=jenkins.addons-579136 san=[127.0.0.1 192.168.49.2 addons-579136 localhost minikube]
	I0717 00:06:25.459642    8602 provision.go:177] copyRemoteCerts
	I0717 00:06:25.459714    8602 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:06:25.459755    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:06:25.475579    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:06:25.567235    8602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-2269/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 00:06:25.590581    8602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-2269/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 00:06:25.613942    8602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-2269/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 00:06:25.637074    8602 provision.go:87] duration metric: took 742.156097ms to configureAuth
	I0717 00:06:25.637103    8602 ubuntu.go:193] setting minikube options for container-runtime
	I0717 00:06:25.637288    8602 config.go:182] Loaded profile config "addons-579136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:06:25.637398    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:06:25.653757    8602 main.go:141] libmachine: Using SSH client type: native
	I0717 00:06:25.654008    8602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0717 00:06:25.654028    8602 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:06:25.881872    8602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 00:06:25.881899    8602 machine.go:97] duration metric: took 1.446626745s to provisionDockerMachine
	I0717 00:06:25.881910    8602 client.go:171] duration metric: took 10.039150985s to LocalClient.Create
	I0717 00:06:25.881922    8602 start.go:167] duration metric: took 10.039208472s to libmachine.API.Create "addons-579136"
	I0717 00:06:25.881930    8602 start.go:293] postStartSetup for "addons-579136" (driver="docker")
	I0717 00:06:25.881942    8602 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:06:25.882009    8602 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:06:25.882069    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:06:25.898919    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:06:25.992199    8602 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:06:25.995343    8602 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 00:06:25.995382    8602 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 00:06:25.995393    8602 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 00:06:25.995399    8602 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0717 00:06:25.995410    8602 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-2269/.minikube/addons for local assets ...
	I0717 00:06:25.995484    8602 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-2269/.minikube/files for local assets ...
	I0717 00:06:25.995513    8602 start.go:296] duration metric: took 113.575913ms for postStartSetup
	I0717 00:06:25.995825    8602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-579136
	I0717 00:06:26.011354    8602 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/config.json ...
	I0717 00:06:26.011633    8602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:06:26.011695    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:06:26.029759    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:06:26.123460    8602 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 00:06:26.127771    8602 start.go:128] duration metric: took 10.287865941s to createHost
	I0717 00:06:26.127796    8602 start.go:83] releasing machines lock for "addons-579136", held for 10.288020916s
	I0717 00:06:26.127868    8602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-579136
	I0717 00:06:26.144522    8602 ssh_runner.go:195] Run: cat /version.json
	I0717 00:06:26.144549    8602 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:06:26.144583    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:06:26.144592    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:06:26.162314    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:06:26.164775    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:06:26.250197    8602 ssh_runner.go:195] Run: systemctl --version
	I0717 00:06:26.381267    8602 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:06:26.522784    8602 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 00:06:26.526747    8602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:06:26.546970    8602 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 00:06:26.547084    8602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:06:26.577386    8602 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0717 00:06:26.577455    8602 start.go:495] detecting cgroup driver to use...
	I0717 00:06:26.577500    8602 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0717 00:06:26.577582    8602 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:06:26.592419    8602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:06:26.603548    8602 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:06:26.603640    8602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:06:26.617198    8602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:06:26.631446    8602 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:06:26.716086    8602 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:06:26.804528    8602 docker.go:233] disabling docker service ...
	I0717 00:06:26.804621    8602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:06:26.822706    8602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:06:26.835110    8602 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:06:26.912886    8602 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:06:26.997678    8602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:06:27.009379    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:06:27.024882    8602 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:06:27.024990    8602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:06:27.035299    8602 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:06:27.035414    8602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:06:27.047736    8602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:06:27.058381    8602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:06:27.068470    8602 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:06:27.077663    8602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:06:27.087414    8602 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:06:27.103392    8602 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:06:27.112734    8602 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:06:27.121416    8602 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:06:27.129716    8602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:06:27.209088    8602 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 00:06:27.317896    8602 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:06:27.317973    8602 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:06:27.321310    8602 start.go:563] Will wait 60s for crictl version
	I0717 00:06:27.321415    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:06:27.324658    8602 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:06:27.366722    8602 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0717 00:06:27.366925    8602 ssh_runner.go:195] Run: crio --version
	I0717 00:06:27.404159    8602 ssh_runner.go:195] Run: crio --version
	I0717 00:06:27.452122    8602 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.24.6 ...
	I0717 00:06:27.454156    8602 cli_runner.go:164] Run: docker network inspect addons-579136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 00:06:27.469880    8602 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0717 00:06:27.473474    8602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:06:27.484111    8602 kubeadm.go:883] updating cluster {Name:addons-579136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-579136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 00:06:27.484231    8602 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:06:27.484291    8602 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:06:27.559845    8602 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:06:27.559870    8602 crio.go:433] Images already preloaded, skipping extraction
	I0717 00:06:27.559927    8602 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:06:27.597556    8602 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:06:27.597578    8602 cache_images.go:84] Images are preloaded, skipping loading
	I0717 00:06:27.597586    8602 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.2 crio true true} ...
	I0717 00:06:27.597692    8602 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-579136 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-579136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 00:06:27.597780    8602 ssh_runner.go:195] Run: crio config
	I0717 00:06:27.664543    8602 cni.go:84] Creating CNI manager for ""
	I0717 00:06:27.664564    8602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 00:06:27.664574    8602 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 00:06:27.664613    8602 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-579136 NodeName:addons-579136 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 00:06:27.664780    8602 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-579136"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 00:06:27.664852    8602 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:06:27.673409    8602 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 00:06:27.673479    8602 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 00:06:27.681833    8602 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0717 00:06:27.699068    8602 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:06:27.716505    8602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0717 00:06:27.733914    8602 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0717 00:06:27.737276    8602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:06:27.747138    8602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:06:27.824330    8602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:06:27.838989    8602 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136 for IP: 192.168.49.2
	I0717 00:06:27.839068    8602 certs.go:194] generating shared ca certs ...
	I0717 00:06:27.839107    8602 certs.go:226] acquiring lock for ca certs: {Name:mkd227790b4a676b68da1df63243d6b7540ab556 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:06:27.839292    8602 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-2269/.minikube/ca.key
	I0717 00:06:29.026214    8602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-2269/.minikube/ca.crt ...
	I0717 00:06:29.026249    8602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/.minikube/ca.crt: {Name:mk3ec6da30a15bb4ce3cdc12ce9f3da174fadba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:06:29.026462    8602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-2269/.minikube/ca.key ...
	I0717 00:06:29.026477    8602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/.minikube/ca.key: {Name:mkc30886f597901886d2d4c317e10e44fcbf8c2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:06:29.026565    8602 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-2269/.minikube/proxy-client-ca.key
	I0717 00:06:29.523380    8602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-2269/.minikube/proxy-client-ca.crt ...
	I0717 00:06:29.523410    8602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/.minikube/proxy-client-ca.crt: {Name:mk53d37825bc9a371566524346a60efd23148742 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:06:29.523581    8602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-2269/.minikube/proxy-client-ca.key ...
	I0717 00:06:29.523597    8602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/.minikube/proxy-client-ca.key: {Name:mk5d59d9b144da98619c094f3ce5d5210ba2947f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:06:29.523664    8602 certs.go:256] generating profile certs ...
	I0717 00:06:29.523724    8602 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.key
	I0717 00:06:29.523741    8602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt with IP's: []
	I0717 00:06:30.053601    8602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt ...
	I0717 00:06:30.053638    8602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: {Name:mkc3473735b3e7e9a0c799c64478883e8d7fe68a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:06:30.053852    8602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.key ...
	I0717 00:06:30.053862    8602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.key: {Name:mkb555ad5b349a8106a3d863d415f8009d89f511 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:06:30.053926    8602 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/apiserver.key.fdc13f68
	I0717 00:06:30.053941    8602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/apiserver.crt.fdc13f68 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0717 00:06:30.467817    8602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/apiserver.crt.fdc13f68 ...
	I0717 00:06:30.467848    8602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/apiserver.crt.fdc13f68: {Name:mk83c745853da1eccfdb14386e08c9a4fe32a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:06:30.468031    8602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/apiserver.key.fdc13f68 ...
	I0717 00:06:30.468045    8602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/apiserver.key.fdc13f68: {Name:mk39ed371af8bbd6214602955d59605f4575606b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:06:30.468124    8602 certs.go:381] copying /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/apiserver.crt.fdc13f68 -> /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/apiserver.crt
	I0717 00:06:30.468208    8602 certs.go:385] copying /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/apiserver.key.fdc13f68 -> /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/apiserver.key
	I0717 00:06:30.468267    8602 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/proxy-client.key
	I0717 00:06:30.468287    8602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/proxy-client.crt with IP's: []
	I0717 00:06:30.655315    8602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/proxy-client.crt ...
	I0717 00:06:30.655342    8602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/proxy-client.crt: {Name:mke72778136fdede5583f9c6d7fc9346ea22d347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:06:30.655509    8602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/proxy-client.key ...
	I0717 00:06:30.655520    8602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/proxy-client.key: {Name:mke6142835b5c3458149a3dbe00b0cf5d87082fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:06:30.655706    8602 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-2269/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 00:06:30.655746    8602 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-2269/.minikube/certs/ca.pem (1078 bytes)
	I0717 00:06:30.655780    8602 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-2269/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:06:30.655809    8602 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-2269/.minikube/certs/key.pem (1679 bytes)
	I0717 00:06:30.656809    8602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-2269/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:06:30.681949    8602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-2269/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 00:06:30.705535    8602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-2269/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:06:30.728299    8602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-2269/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 00:06:30.751969    8602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0717 00:06:30.775188    8602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 00:06:30.798447    8602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:06:30.821855    8602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 00:06:30.845628    8602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-2269/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:06:30.869557    8602 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 00:06:30.886975    8602 ssh_runner.go:195] Run: openssl version
	I0717 00:06:30.892481    8602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:06:30.901974    8602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:06:30.905331    8602 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:06:30.905432    8602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:06:30.912195    8602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 00:06:30.921010    8602 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:06:30.924172    8602 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 00:06:30.924223    8602 kubeadm.go:392] StartCluster: {Name:addons-579136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-579136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:06:30.924303    8602 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 00:06:30.924374    8602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 00:06:30.965911    8602 cri.go:89] found id: ""
	I0717 00:06:30.965995    8602 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 00:06:30.975115    8602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 00:06:30.983989    8602 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0717 00:06:30.984056    8602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 00:06:30.992636    8602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 00:06:30.992654    8602 kubeadm.go:157] found existing configuration files:
	
	I0717 00:06:30.992704    8602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 00:06:31.001695    8602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 00:06:31.001811    8602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 00:06:31.017086    8602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 00:06:31.026103    8602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 00:06:31.026171    8602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 00:06:31.035583    8602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 00:06:31.045905    8602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 00:06:31.045973    8602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 00:06:31.055236    8602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 00:06:31.064518    8602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 00:06:31.064582    8602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 00:06:31.073157    8602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 00:06:31.120178    8602 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 00:06:31.120467    8602 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 00:06:31.162112    8602 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0717 00:06:31.162187    8602 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1064-aws
	I0717 00:06:31.162226    8602 kubeadm.go:310] OS: Linux
	I0717 00:06:31.162276    8602 kubeadm.go:310] CGROUPS_CPU: enabled
	I0717 00:06:31.162328    8602 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0717 00:06:31.162378    8602 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0717 00:06:31.162441    8602 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0717 00:06:31.162491    8602 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0717 00:06:31.162544    8602 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0717 00:06:31.162592    8602 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0717 00:06:31.162644    8602 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0717 00:06:31.162694    8602 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0717 00:06:31.225910    8602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 00:06:31.226019    8602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 00:06:31.226115    8602 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 00:06:31.452427    8602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 00:06:31.456435    8602 out.go:204]   - Generating certificates and keys ...
	I0717 00:06:31.456564    8602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 00:06:31.456636    8602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 00:06:32.265600    8602 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 00:06:32.475865    8602 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 00:06:33.209980    8602 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 00:06:33.791398    8602 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 00:06:34.853611    8602 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 00:06:34.853922    8602 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-579136 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 00:06:35.223691    8602 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 00:06:35.224043    8602 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-579136 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 00:06:35.950199    8602 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 00:06:36.481470    8602 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 00:06:36.866085    8602 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 00:06:36.866381    8602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 00:06:37.288151    8602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 00:06:37.898653    8602 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 00:06:38.367882    8602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 00:06:39.066960    8602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 00:06:39.627960    8602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 00:06:39.628583    8602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 00:06:39.631314    8602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 00:06:39.633392    8602 out.go:204]   - Booting up control plane ...
	I0717 00:06:39.633500    8602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 00:06:39.633591    8602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 00:06:39.636083    8602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 00:06:39.646375    8602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 00:06:39.647288    8602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 00:06:39.647559    8602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 00:06:39.739021    8602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 00:06:39.739115    8602 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 00:06:41.240521    8602 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501349412s
	I0717 00:06:41.240605    8602 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 00:06:47.249078    8602 kubeadm.go:310] [api-check] The API server is healthy after 6.006736063s
	I0717 00:06:47.286812    8602 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 00:06:47.304756    8602 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 00:06:47.346084    8602 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 00:06:47.346298    8602 kubeadm.go:310] [mark-control-plane] Marking the node addons-579136 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 00:06:47.357648    8602 kubeadm.go:310] [bootstrap-token] Using token: svznq2.hqfvh980hynisrq7
	I0717 00:06:47.359735    8602 out.go:204]   - Configuring RBAC rules ...
	I0717 00:06:47.359875    8602 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 00:06:47.364737    8602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 00:06:47.371785    8602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 00:06:47.374977    8602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 00:06:47.379436    8602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 00:06:47.383380    8602 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 00:06:47.655186    8602 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 00:06:48.100875    8602 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 00:06:48.652987    8602 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 00:06:48.654076    8602 kubeadm.go:310] 
	I0717 00:06:48.654153    8602 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 00:06:48.654159    8602 kubeadm.go:310] 
	I0717 00:06:48.654233    8602 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 00:06:48.654238    8602 kubeadm.go:310] 
	I0717 00:06:48.654262    8602 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 00:06:48.654319    8602 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 00:06:48.654370    8602 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 00:06:48.654375    8602 kubeadm.go:310] 
	I0717 00:06:48.654426    8602 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 00:06:48.654431    8602 kubeadm.go:310] 
	I0717 00:06:48.654476    8602 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 00:06:48.654484    8602 kubeadm.go:310] 
	I0717 00:06:48.654535    8602 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 00:06:48.654606    8602 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 00:06:48.654671    8602 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 00:06:48.654676    8602 kubeadm.go:310] 
	I0717 00:06:48.654779    8602 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 00:06:48.654854    8602 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 00:06:48.654858    8602 kubeadm.go:310] 
	I0717 00:06:48.654939    8602 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token svznq2.hqfvh980hynisrq7 \
	I0717 00:06:48.655038    8602 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:364e57b24df01cc43b6451b84edd589741e4028e3e02ff8d2cf1063ebd74c881 \
	I0717 00:06:48.655058    8602 kubeadm.go:310] 	--control-plane 
	I0717 00:06:48.655063    8602 kubeadm.go:310] 
	I0717 00:06:48.655144    8602 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 00:06:48.655148    8602 kubeadm.go:310] 
	I0717 00:06:48.655227    8602 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token svznq2.hqfvh980hynisrq7 \
	I0717 00:06:48.655330    8602 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:364e57b24df01cc43b6451b84edd589741e4028e3e02ff8d2cf1063ebd74c881 
	I0717 00:06:48.657407    8602 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1064-aws\n", err: exit status 1
	I0717 00:06:48.657522    8602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 00:06:48.657545    8602 cni.go:84] Creating CNI manager for ""
	I0717 00:06:48.657556    8602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 00:06:48.659655    8602 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 00:06:48.661620    8602 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 00:06:48.665364    8602 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0717 00:06:48.665383    8602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 00:06:48.683457    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 00:06:48.932187    8602 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 00:06:48.932320    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-579136 minikube.k8s.io/updated_at=2024_07_17T00_06_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=addons-579136 minikube.k8s.io/primary=true
	I0717 00:06:48.932351    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:48.945171    8602 ops.go:34] apiserver oom_adj: -16
	I0717 00:06:49.029893    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:49.530873    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:50.030026    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:50.530843    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:51.030435    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:51.530634    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:52.030959    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:52.530669    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:53.030015    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:53.530901    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:54.030936    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:54.530138    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:55.030684    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:55.530964    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:56.030313    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:56.530560    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:57.030137    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:57.530795    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:58.030897    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:58.530215    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:59.030449    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:06:59.530861    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:07:00.030903    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:07:00.530897    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:07:01.030660    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:07:01.530611    8602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:07:01.623571    8602 kubeadm.go:1113] duration metric: took 12.691302844s to wait for elevateKubeSystemPrivileges
	I0717 00:07:01.623604    8602 kubeadm.go:394] duration metric: took 30.699384745s to StartCluster
	I0717 00:07:01.623622    8602 settings.go:142] acquiring lock: {Name:mk883dff9b09cfe64fa59919f3a5dca1089afb6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:07:01.623739    8602 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19265-2269/kubeconfig
	I0717 00:07:01.624179    8602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/kubeconfig: {Name:mk7d21bd0dadef6e1232ea2d159c34b00c02e88a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:07:01.624392    8602 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:07:01.624494    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 00:07:01.624745    8602 config.go:182] Loaded profile config "addons-579136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:07:01.624784    8602 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0717 00:07:01.624860    8602 addons.go:69] Setting yakd=true in profile "addons-579136"
	I0717 00:07:01.624885    8602 addons.go:234] Setting addon yakd=true in "addons-579136"
	I0717 00:07:01.624911    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.625370    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.625832    8602 addons.go:69] Setting cloud-spanner=true in profile "addons-579136"
	I0717 00:07:01.625872    8602 addons.go:234] Setting addon cloud-spanner=true in "addons-579136"
	I0717 00:07:01.625901    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.626356    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.626478    8602 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-579136"
	I0717 00:07:01.626503    8602 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-579136"
	I0717 00:07:01.626530    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.626799    8602 addons.go:69] Setting registry=true in profile "addons-579136"
	I0717 00:07:01.626826    8602 addons.go:234] Setting addon registry=true in "addons-579136"
	I0717 00:07:01.626850    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.626913    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.627329    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.629656    8602 addons.go:69] Setting storage-provisioner=true in profile "addons-579136"
	I0717 00:07:01.629695    8602 addons.go:234] Setting addon storage-provisioner=true in "addons-579136"
	I0717 00:07:01.629735    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.630140    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.630773    8602 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-579136"
	I0717 00:07:01.630833    8602 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-579136"
	I0717 00:07:01.630859    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.631253    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.637010    8602 addons.go:69] Setting default-storageclass=true in profile "addons-579136"
	I0717 00:07:01.637098    8602 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-579136"
	I0717 00:07:01.637432    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.637758    8602 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-579136"
	I0717 00:07:01.637792    8602 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-579136"
	I0717 00:07:01.638134    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.648553    8602 addons.go:69] Setting volcano=true in profile "addons-579136"
	I0717 00:07:01.648652    8602 addons.go:234] Setting addon volcano=true in "addons-579136"
	I0717 00:07:01.648722    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.649202    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.649499    8602 addons.go:69] Setting gcp-auth=true in profile "addons-579136"
	I0717 00:07:01.649568    8602 mustload.go:65] Loading cluster: addons-579136
	I0717 00:07:01.649744    8602 config.go:182] Loaded profile config "addons-579136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:07:01.650005    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.681858    8602 addons.go:69] Setting ingress=true in profile "addons-579136"
	I0717 00:07:01.681991    8602 addons.go:234] Setting addon ingress=true in "addons-579136"
	I0717 00:07:01.682378    8602 addons.go:69] Setting volumesnapshots=true in profile "addons-579136"
	I0717 00:07:01.682463    8602 addons.go:234] Setting addon volumesnapshots=true in "addons-579136"
	I0717 00:07:01.682518    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.682329    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.688041    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.687447    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.732058    8602 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0717 00:07:01.734048    8602 addons.go:234] Setting addon default-storageclass=true in "addons-579136"
	I0717 00:07:01.734137    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.734689    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.687531    8602 out.go:177] * Verifying Kubernetes components...
	I0717 00:07:01.722844    8602 addons.go:69] Setting ingress-dns=true in profile "addons-579136"
	I0717 00:07:01.744445    8602 addons.go:234] Setting addon ingress-dns=true in "addons-579136"
	I0717 00:07:01.744534    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.745067    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.752477    8602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:07:01.722858    8602 addons.go:69] Setting inspektor-gadget=true in profile "addons-579136"
	I0717 00:07:01.756782    8602 addons.go:234] Setting addon inspektor-gadget=true in "addons-579136"
	I0717 00:07:01.756864    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.757915    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.775702    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.722865    8602 addons.go:69] Setting metrics-server=true in profile "addons-579136"
	I0717 00:07:01.790280    8602 addons.go:234] Setting addon metrics-server=true in "addons-579136"
	I0717 00:07:01.790347    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.790859    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.799780    8602 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0717 00:07:01.803618    8602 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0717 00:07:01.743423    8602 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0717 00:07:01.744411    8602 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-579136"
	I0717 00:07:01.804040    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:01.804472    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:01.818106    8602 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0717 00:07:01.818190    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:01.826937    8602 out.go:177]   - Using image docker.io/registry:2.8.3
	I0717 00:07:01.833038    8602 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0717 00:07:01.836417    8602 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0717 00:07:01.836480    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0717 00:07:01.836594    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:01.846873    8602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0717 00:07:01.850890    8602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0717 00:07:01.855603    8602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0717 00:07:01.858931    8602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0717 00:07:01.861327    8602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0717 00:07:01.861453    8602 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 00:07:01.861471    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0717 00:07:01.861539    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:01.874962    8602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:07:01.861372    8602 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0717 00:07:01.877943    8602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	W0717 00:07:01.878204    8602 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0717 00:07:01.882252    8602 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0717 00:07:01.882273    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0717 00:07:01.882336    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:01.899383    8602 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 00:07:01.899666    8602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:07:01.903042    8602 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 00:07:01.903067    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0717 00:07:01.903134    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:01.903394    8602 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:07:01.903425    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 00:07:01.903488    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:01.941967    8602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0717 00:07:01.944012    8602 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0717 00:07:01.948618    8602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0717 00:07:01.948647    8602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0717 00:07:01.948729    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:01.973621    8602 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0717 00:07:01.974566    8602 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 00:07:01.974588    8602 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 00:07:01.974666    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:01.976858    8602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0717 00:07:01.976878    8602 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0717 00:07:01.976935    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:02.000511    8602 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0717 00:07:02.000713    8602 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0717 00:07:02.003759    8602 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0717 00:07:02.003783    8602 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0717 00:07:02.004038    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:02.004114    8602 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 00:07:02.004125    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0717 00:07:02.004278    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:02.021337    8602 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0717 00:07:02.023384    8602 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 00:07:02.023415    8602 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 00:07:02.023487    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:02.040543    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.041436    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.043359    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.044341    8602 out.go:177]   - Using image docker.io/busybox:stable
	I0717 00:07:02.046900    8602 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0717 00:07:02.050085    8602 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 00:07:02.050108    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0717 00:07:02.050179    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:02.060880    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 00:07:02.142850    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.159249    8602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:07:02.172034    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.175383    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.175468    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.201914    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.212673    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.213515    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.216985    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.217853    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.223734    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:02.368547    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 00:07:02.488567    8602 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0717 00:07:02.488586    8602 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0717 00:07:02.491976    8602 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0717 00:07:02.491993    8602 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0717 00:07:02.580945    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0717 00:07:02.630444    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:07:02.636982    8602 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0717 00:07:02.637052    8602 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0717 00:07:02.641441    8602 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 00:07:02.641497    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0717 00:07:02.645972    8602 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0717 00:07:02.646038    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0717 00:07:02.648905    8602 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0717 00:07:02.648959    8602 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0717 00:07:02.652030    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 00:07:02.657096    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 00:07:02.661875    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 00:07:02.679389    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 00:07:02.713394    8602 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0717 00:07:02.713465    8602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0717 00:07:02.720533    8602 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0717 00:07:02.720603    8602 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0717 00:07:02.721360    8602 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0717 00:07:02.721416    8602 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0717 00:07:02.808185    8602 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0717 00:07:02.808259    8602 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0717 00:07:02.812366    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0717 00:07:02.847882    8602 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 00:07:02.847959    8602 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 00:07:02.891717    8602 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0717 00:07:02.891786    8602 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0717 00:07:02.898020    8602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0717 00:07:02.898079    8602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0717 00:07:02.908561    8602 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0717 00:07:02.908640    8602 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0717 00:07:02.945142    8602 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0717 00:07:02.945211    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0717 00:07:02.957363    8602 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 00:07:02.957386    8602 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 00:07:03.018535    8602 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0717 00:07:03.018608    8602 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0717 00:07:03.052616    8602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0717 00:07:03.052687    8602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0717 00:07:03.056820    8602 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0717 00:07:03.056895    8602 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0717 00:07:03.096683    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 00:07:03.109085    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0717 00:07:03.185848    8602 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0717 00:07:03.185936    8602 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0717 00:07:03.209359    8602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0717 00:07:03.209430    8602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0717 00:07:03.212571    8602 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0717 00:07:03.212630    8602 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0717 00:07:03.247696    8602 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0717 00:07:03.247776    8602 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0717 00:07:03.325417    8602 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:07:03.325485    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0717 00:07:03.330175    8602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0717 00:07:03.330234    8602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0717 00:07:03.333374    8602 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 00:07:03.333439    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0717 00:07:03.411920    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 00:07:03.426187    8602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0717 00:07:03.426286    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0717 00:07:03.447309    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:07:03.471604    8602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0717 00:07:03.471674    8602 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0717 00:07:03.547618    8602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0717 00:07:03.547690    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0717 00:07:03.586432    8602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0717 00:07:03.586502    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0717 00:07:03.653459    8602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 00:07:03.653527    8602 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0717 00:07:03.720466    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 00:07:05.215869    8602 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.154946248s)
	I0717 00:07:05.215944    8602 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0717 00:07:05.216342    8602 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.057070314s)
	I0717 00:07:05.217969    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.849386025s)
	I0717 00:07:05.221805    8602 node_ready.go:35] waiting up to 6m0s for node "addons-579136" to be "Ready" ...
	I0717 00:07:05.857041    8602 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-579136" context rescaled to 1 replicas
	I0717 00:07:06.117696    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.536678906s)
	I0717 00:07:06.758470    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.127946616s)
	I0717 00:07:07.259476    8602 node_ready.go:53] node "addons-579136" has status "Ready":"False"
	I0717 00:07:08.295403    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.643291053s)
	I0717 00:07:08.295896    8602 addons.go:475] Verifying addon ingress=true in "addons-579136"
	I0717 00:07:08.295600    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.638442647s)
	I0717 00:07:08.295628    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.633688415s)
	I0717 00:07:08.295656    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.616205272s)
	I0717 00:07:08.295691    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.483268636s)
	I0717 00:07:08.296482    8602 addons.go:475] Verifying addon registry=true in "addons-579136"
	I0717 00:07:08.295741    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.198989154s)
	I0717 00:07:08.296801    8602 addons.go:475] Verifying addon metrics-server=true in "addons-579136"
	I0717 00:07:08.295771    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.18661712s)
	I0717 00:07:08.295825    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.883836952s)
	I0717 00:07:08.298661    8602 out.go:177] * Verifying ingress addon...
	I0717 00:07:08.300642    8602 out.go:177] * Verifying registry addon...
	I0717 00:07:08.300730    8602 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-579136 service yakd-dashboard -n yakd-dashboard
	
	I0717 00:07:08.303123    8602 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0717 00:07:08.304717    8602 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0717 00:07:08.325588    8602 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 00:07:08.325667    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:08.326485    8602 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0717 00:07:08.326529    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:08.343410    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.896010019s)
	W0717 00:07:08.343511    8602 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 00:07:08.343554    8602 retry.go:31] will retry after 366.531691ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	W0717 00:07:08.346481    8602 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0717 00:07:08.693196    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.972624246s)
	I0717 00:07:08.693274    8602 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-579136"
	I0717 00:07:08.697284    8602 out.go:177] * Verifying csi-hostpath-driver addon...
	I0717 00:07:08.700232    8602 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0717 00:07:08.707781    8602 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 00:07:08.707848    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:08.711125    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:07:08.810834    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:08.821054    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:09.204932    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:09.351612    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:09.352469    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:09.707972    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:09.725086    8602 node_ready.go:53] node "addons-579136" has status "Ready":"False"
	I0717 00:07:09.807115    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:09.809714    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:10.162087    8602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.450874475s)
	I0717 00:07:10.207302    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:10.308814    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:10.309609    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:10.705631    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:10.810742    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:10.815720    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:10.866689    8602 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0717 00:07:10.866806    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:10.902823    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:11.032849    8602 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0717 00:07:11.061737    8602 addons.go:234] Setting addon gcp-auth=true in "addons-579136"
	I0717 00:07:11.061790    8602 host.go:66] Checking if "addons-579136" exists ...
	I0717 00:07:11.062215    8602 cli_runner.go:164] Run: docker container inspect addons-579136 --format={{.State.Status}}
	I0717 00:07:11.089779    8602 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0717 00:07:11.089845    8602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579136
	I0717 00:07:11.119669    8602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/addons-579136/id_rsa Username:docker}
	I0717 00:07:11.204710    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:11.221194    8602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:07:11.222691    8602 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0717 00:07:11.224423    8602 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0717 00:07:11.224449    8602 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0717 00:07:11.253749    8602 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0717 00:07:11.253775    8602 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0717 00:07:11.292932    8602 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 00:07:11.292961    8602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0717 00:07:11.309042    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:11.312704    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:11.324108    8602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 00:07:11.705084    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:11.726297    8602 node_ready.go:53] node "addons-579136" has status "Ready":"False"
	I0717 00:07:11.809841    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:11.811144    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:12.062790    8602 addons.go:475] Verifying addon gcp-auth=true in "addons-579136"
	I0717 00:07:12.065081    8602 out.go:177] * Verifying gcp-auth addon...
	I0717 00:07:12.068292    8602 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0717 00:07:12.087167    8602 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 00:07:12.087194    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:12.204755    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:12.309345    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:12.311432    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:12.572346    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:12.704291    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:12.813422    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:12.815402    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:13.073192    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:13.205812    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:13.308675    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:13.309316    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:13.571667    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:13.704307    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:13.808785    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:13.811555    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:14.073503    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:14.205525    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:14.226272    8602 node_ready.go:53] node "addons-579136" has status "Ready":"False"
	I0717 00:07:14.308019    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:14.309029    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:14.572679    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:14.705128    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:14.809673    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:14.810054    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:15.072356    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:15.204994    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:15.308674    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:15.309079    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:15.572200    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:15.704828    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:15.807945    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:15.810181    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:16.072350    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:16.205139    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:16.308486    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:16.309452    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:16.572175    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:16.705231    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:16.725864    8602 node_ready.go:53] node "addons-579136" has status "Ready":"False"
	I0717 00:07:16.808246    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:16.808654    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:17.072237    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:17.204224    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:17.307608    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:17.308669    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:17.571821    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:17.705386    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:17.807483    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:17.808486    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:18.073659    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:18.204688    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:18.308373    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:18.309802    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:18.571551    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:18.704098    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:18.807176    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:18.812872    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:19.072433    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:19.204505    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:19.225078    8602 node_ready.go:53] node "addons-579136" has status "Ready":"False"
	I0717 00:07:19.306860    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:19.309374    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:19.573474    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:19.756562    8602 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 00:07:19.756588    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:19.761429    8602 node_ready.go:49] node "addons-579136" has status "Ready":"True"
	I0717 00:07:19.761454    8602 node_ready.go:38] duration metric: took 14.539589335s for node "addons-579136" to be "Ready" ...
	I0717 00:07:19.761465    8602 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:07:19.816478    8602 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p58r6" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:19.870425    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:19.878836    8602 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 00:07:19.878862    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:20.072324    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:20.209882    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:20.318533    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:20.336550    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:20.573016    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:20.706998    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:20.810479    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:20.811442    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:21.072594    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:21.206445    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:21.308395    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:21.310431    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:21.571310    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:21.706534    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:21.809490    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:21.810111    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:21.823413    8602 pod_ready.go:92] pod "coredns-7db6d8ff4d-p58r6" in "kube-system" namespace has status "Ready":"True"
	I0717 00:07:21.823436    8602 pod_ready.go:81] duration metric: took 2.006918796s for pod "coredns-7db6d8ff4d-p58r6" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:21.823460    8602 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-579136" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:21.830728    8602 pod_ready.go:92] pod "etcd-addons-579136" in "kube-system" namespace has status "Ready":"True"
	I0717 00:07:21.830813    8602 pod_ready.go:81] duration metric: took 7.34477ms for pod "etcd-addons-579136" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:21.830844    8602 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-579136" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:21.840242    8602 pod_ready.go:92] pod "kube-apiserver-addons-579136" in "kube-system" namespace has status "Ready":"True"
	I0717 00:07:21.840306    8602 pod_ready.go:81] duration metric: took 9.440382ms for pod "kube-apiserver-addons-579136" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:21.840335    8602 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-579136" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:21.845704    8602 pod_ready.go:92] pod "kube-controller-manager-addons-579136" in "kube-system" namespace has status "Ready":"True"
	I0717 00:07:21.845775    8602 pod_ready.go:81] duration metric: took 5.41064ms for pod "kube-controller-manager-addons-579136" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:21.845805    8602 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b7z7h" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:21.861543    8602 pod_ready.go:92] pod "kube-proxy-b7z7h" in "kube-system" namespace has status "Ready":"True"
	I0717 00:07:21.861616    8602 pod_ready.go:81] duration metric: took 15.783777ms for pod "kube-proxy-b7z7h" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:21.861644    8602 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-579136" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:22.072692    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:22.206276    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:22.220724    8602 pod_ready.go:92] pod "kube-scheduler-addons-579136" in "kube-system" namespace has status "Ready":"True"
	I0717 00:07:22.220749    8602 pod_ready.go:81] duration metric: took 359.083854ms for pod "kube-scheduler-addons-579136" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:22.220790    8602 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:22.311144    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:22.312446    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:22.573794    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:22.707006    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:22.809028    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:22.809862    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:23.072237    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:23.206142    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:23.308646    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:23.309553    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:23.571887    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:23.706331    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:23.811090    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:23.822558    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:24.073950    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:24.207015    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:24.228139    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:24.311237    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:24.317661    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:24.572739    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:24.707226    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:24.812479    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:24.813326    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:25.073274    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:25.205857    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:25.309517    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:25.310732    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:25.572678    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:25.706160    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:25.807676    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:25.811309    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:26.073658    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:26.206421    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:26.232156    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:26.309089    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:26.314325    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:26.575254    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:26.708164    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:26.816680    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:26.818396    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:27.071724    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:27.206673    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:27.311620    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:27.312738    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:27.572860    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:27.706482    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:27.809178    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:27.814128    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:28.071695    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:28.205880    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:28.308024    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:28.310697    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:28.580159    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:28.705863    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:28.727380    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:28.807383    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:28.809973    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:29.072043    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:29.206724    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:29.307525    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:29.309886    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:29.572555    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:29.706284    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:29.809038    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:29.810396    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:30.073383    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:30.206618    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:30.309073    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:30.316497    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:30.572442    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:30.719103    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:30.731005    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:30.808955    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:30.810136    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:31.073163    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:31.206726    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:31.309840    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:31.310973    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:31.572397    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:31.706893    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:31.808390    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:31.818378    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:32.072247    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:32.206239    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:32.311236    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:32.312734    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:32.571943    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:32.707055    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:32.811211    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:32.814504    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:33.072735    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:33.210996    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:33.234500    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:33.310297    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:33.311202    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:33.571988    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:33.707235    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:33.816193    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:33.817805    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:34.073610    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:34.205796    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:34.308532    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:34.310682    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:34.577447    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:34.707658    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:34.809018    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:34.811869    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:35.072580    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:35.206713    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:35.307543    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:35.310438    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:35.573084    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:35.706174    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:35.729593    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:35.806993    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:35.809959    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:36.072728    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:36.205508    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:36.308743    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:36.309580    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:36.572156    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:36.705811    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:36.808966    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:36.810422    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:37.073418    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:37.209013    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:37.310675    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:37.319840    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:37.573754    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:37.707669    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:37.730800    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:37.812162    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:37.813061    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:38.071899    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:38.207787    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:38.311429    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:38.313438    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:38.574064    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:38.706950    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:38.810238    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:38.813488    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:39.087960    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:39.212538    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:39.314296    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:39.315905    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:39.577637    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:39.707991    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:39.822112    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:39.826129    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:40.083845    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:40.215608    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:40.237607    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:40.321668    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:40.322867    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:40.572273    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:40.722556    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:40.814632    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:40.815499    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:41.074935    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:41.207910    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:41.310482    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:41.317966    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:41.572395    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:41.707156    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:41.808221    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:41.810496    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:42.072643    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:42.207878    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:42.308767    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:42.318164    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:42.572409    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:42.708168    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:42.743242    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:42.818568    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:42.819540    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:43.073080    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:43.208020    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:43.314591    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:43.315995    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:43.572254    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:43.708885    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:43.811100    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:43.814181    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:44.072340    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:44.209086    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:44.310441    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:44.319873    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:44.573999    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:44.706844    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:44.809352    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:44.815560    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:45.075321    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:45.208058    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:45.228798    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:45.318558    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:45.319977    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:45.573169    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:45.705829    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:45.820515    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:45.821941    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:46.072922    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:46.207245    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:46.309309    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:46.315037    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:46.575736    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:46.709955    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:46.813988    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:46.816092    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:47.076340    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:47.208747    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:47.236993    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:47.311391    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:47.315799    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:47.572783    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:47.709658    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:47.812710    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:47.814614    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:48.072955    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:48.206365    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:48.309476    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:48.312610    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:48.577214    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:48.705408    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:48.809771    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:48.810606    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:49.072239    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:49.206031    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:49.311321    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:49.312354    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:49.571967    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:49.706041    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:49.726416    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:49.808374    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:49.809655    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:50.074233    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:50.208359    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:50.307388    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:50.311012    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:50.571948    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:50.706855    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:50.807612    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:50.811211    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:51.071690    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:51.206261    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:51.308339    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:51.309793    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:51.572690    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:51.706287    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:51.727382    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:51.807867    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:51.810109    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:52.072112    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:52.205913    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:52.312295    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:52.313652    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:52.572783    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:52.730422    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:52.810213    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:52.813250    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:53.074908    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:53.207644    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:53.311714    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:53.315963    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:53.581520    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:53.708269    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:53.734065    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:53.811584    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:53.812392    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:54.072939    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:54.209246    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:54.314361    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:54.321234    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:54.573156    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:54.707936    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:54.822725    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:54.824528    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:55.072642    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:55.206313    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:55.308189    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:55.310162    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:55.572886    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:55.706381    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:55.809977    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:55.810348    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:56.073813    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:56.247249    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:56.248357    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:56.315215    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:56.316100    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:56.576661    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:56.705786    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:56.811037    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:56.818626    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:57.073188    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:57.216290    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:57.310854    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:57.326454    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:57.572221    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:57.707543    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:57.820117    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:57.824058    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:58.072659    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:58.206637    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:58.320487    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:58.323682    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:58.572336    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:58.713483    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:58.732443    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:58.812450    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:58.812650    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:59.072538    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:59.213120    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:59.308917    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:59.311003    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:59.571677    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:59.705623    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:07:59.809531    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:07:59.810240    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:00.081379    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:00.212345    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:00.309400    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:00.312484    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:00.572146    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:00.711897    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:00.811680    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:00.812320    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:01.073140    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:01.207057    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:01.230541    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:01.311226    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:01.311569    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:01.571804    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:01.706159    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:01.807750    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:01.810644    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:02.072748    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:02.206066    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:02.310014    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:02.311938    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:02.575222    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:02.705435    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:02.808915    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:02.811823    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:03.073004    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:03.206464    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:03.233906    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:03.308748    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:03.315453    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:03.572185    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:03.706367    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:03.810365    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:03.811312    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:04.072253    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:04.206025    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:04.309204    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:04.312263    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:04.571838    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:04.707080    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:04.811827    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:04.820579    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:05.073369    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:05.215031    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:05.315133    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:05.333813    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:05.572967    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:05.706244    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:05.727170    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:05.808039    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:05.812555    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:06.072595    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:06.209573    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:06.312832    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:06.314613    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:06.572986    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:06.711079    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:06.811346    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:06.814082    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:07.073130    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:07.207956    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:07.321290    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:07.322900    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:07.572584    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:07.706392    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:07.729626    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:07.808341    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:07.815862    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:08.073147    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:08.208273    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:08.311007    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:08.314519    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:08.572603    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:08.706263    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:08.810947    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:08.813704    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:09.072817    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:09.206416    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:09.307827    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:09.308892    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:08:09.571542    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:09.706174    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:09.808227    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:09.812497    8602 kapi.go:107] duration metric: took 1m1.507773473s to wait for kubernetes.io/minikube-addons=registry ...
	I0717 00:08:10.073058    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:10.207915    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:10.231579    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:10.309510    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:10.579797    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:10.705910    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:10.809786    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:11.072448    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:11.205848    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:11.307344    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:11.572281    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:11.705674    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:11.808086    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:12.071842    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:12.209352    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:12.309119    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:12.572907    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:12.707694    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:12.737869    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:12.810409    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:13.072297    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:13.206645    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:13.308175    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:13.571525    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:13.705578    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:13.807408    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:14.071843    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:14.206704    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:14.307465    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:14.583629    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:14.705929    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:14.808912    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:15.073093    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:15.207690    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:15.229370    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:15.307619    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:15.572104    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:15.706515    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:15.807416    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:16.071877    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:16.206218    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:16.307800    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:16.573258    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:16.707345    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:16.808948    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:17.073036    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:17.237232    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:17.262188    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:17.324491    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:17.574624    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:17.705842    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:17.807655    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:18.072572    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:18.208084    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:18.307912    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:18.572400    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:18.711148    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:18.807669    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:19.072828    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:19.207116    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:19.312764    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:19.572543    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:19.708488    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:19.727563    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:19.807790    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:20.072560    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:20.206145    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:08:20.307771    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:20.572152    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:20.705555    8602 kapi.go:107] duration metric: took 1m12.005318341s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0717 00:08:20.807852    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:21.072564    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:21.308490    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:21.571951    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:21.808981    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:22.072308    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:22.228140    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:22.307405    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:22.572740    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:22.807398    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:23.071900    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:23.307729    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:23.572229    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:23.807730    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:24.072336    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:24.307587    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:24.571959    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:24.727098    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:24.807611    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:25.072214    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:25.309712    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:25.572450    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:25.808898    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:26.072961    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:26.308538    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:26.572325    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:26.727981    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:26.809063    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:27.073580    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:27.308816    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:27.574507    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:27.809501    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:28.072323    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:28.308260    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:28.571915    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:28.808558    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:29.073146    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:29.227021    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:29.309100    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:29.571902    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:29.808090    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:30.081192    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:30.309711    8602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:08:30.572935    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:30.819128    8602 kapi.go:107] duration metric: took 1m22.51600148s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0717 00:08:31.073851    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:31.227307    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:31.571985    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:32.073639    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:32.573124    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:33.071686    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:33.571536    8602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:08:33.726865    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:34.072360    8602 kapi.go:107] duration metric: took 1m22.004067887s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0717 00:08:34.074173    8602 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-579136 cluster.
	I0717 00:08:34.075741    8602 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0717 00:08:34.077408    8602 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0717 00:08:34.079303    8602 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0717 00:08:34.080681    8602 addons.go:510] duration metric: took 1m32.45589318s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0717 00:08:35.727168    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:38.227275    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:40.228413    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:42.726951    8602 pod_ready.go:102] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"False"
	I0717 00:08:43.226927    8602 pod_ready.go:92] pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace has status "Ready":"True"
	I0717 00:08:43.226949    8602 pod_ready.go:81] duration metric: took 1m21.006143148s for pod "metrics-server-c59844bb4-hqndr" in "kube-system" namespace to be "Ready" ...
	I0717 00:08:43.226961    8602 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-r42hf" in "kube-system" namespace to be "Ready" ...
	I0717 00:08:43.231997    8602 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-r42hf" in "kube-system" namespace has status "Ready":"True"
	I0717 00:08:43.232021    8602 pod_ready.go:81] duration metric: took 5.053134ms for pod "nvidia-device-plugin-daemonset-r42hf" in "kube-system" namespace to be "Ready" ...
	I0717 00:08:43.232042    8602 pod_ready.go:38] duration metric: took 1m23.470563589s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:08:43.232770    8602 api_server.go:52] waiting for apiserver process to appear ...
	I0717 00:08:43.233453    8602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 00:08:43.233524    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 00:08:43.282604    8602 cri.go:89] found id: "84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1"
	I0717 00:08:43.282625    8602 cri.go:89] found id: ""
	I0717 00:08:43.282632    8602 logs.go:276] 1 containers: [84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1]
	I0717 00:08:43.282688    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:43.287152    8602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 00:08:43.287234    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 00:08:43.326744    8602 cri.go:89] found id: "be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39"
	I0717 00:08:43.326800    8602 cri.go:89] found id: ""
	I0717 00:08:43.326809    8602 logs.go:276] 1 containers: [be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39]
	I0717 00:08:43.326867    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:43.330556    8602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 00:08:43.330632    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 00:08:43.368363    8602 cri.go:89] found id: "33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21"
	I0717 00:08:43.368385    8602 cri.go:89] found id: ""
	I0717 00:08:43.368394    8602 logs.go:276] 1 containers: [33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21]
	I0717 00:08:43.368459    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:43.371960    8602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 00:08:43.372028    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 00:08:43.411423    8602 cri.go:89] found id: "eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd"
	I0717 00:08:43.411446    8602 cri.go:89] found id: ""
	I0717 00:08:43.411454    8602 logs.go:276] 1 containers: [eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd]
	I0717 00:08:43.411511    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:43.415141    8602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 00:08:43.415211    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 00:08:43.457442    8602 cri.go:89] found id: "f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a"
	I0717 00:08:43.457460    8602 cri.go:89] found id: ""
	I0717 00:08:43.457469    8602 logs.go:276] 1 containers: [f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a]
	I0717 00:08:43.457522    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:43.460894    8602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 00:08:43.460982    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 00:08:43.505085    8602 cri.go:89] found id: "8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883"
	I0717 00:08:43.505107    8602 cri.go:89] found id: ""
	I0717 00:08:43.505115    8602 logs.go:276] 1 containers: [8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883]
	I0717 00:08:43.505169    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:43.508801    8602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 00:08:43.508869    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 00:08:43.548185    8602 cri.go:89] found id: "8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e"
	I0717 00:08:43.548247    8602 cri.go:89] found id: ""
	I0717 00:08:43.548269    8602 logs.go:276] 1 containers: [8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e]
	I0717 00:08:43.548339    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:43.551874    8602 logs.go:123] Gathering logs for kube-controller-manager [8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883] ...
	I0717 00:08:43.551897    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883"
	I0717 00:08:43.625867    8602 logs.go:123] Gathering logs for container status ...
	I0717 00:08:43.625898    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 00:08:43.705960    8602 logs.go:123] Gathering logs for kube-apiserver [84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1] ...
	I0717 00:08:43.706000    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1"
	I0717 00:08:43.777756    8602 logs.go:123] Gathering logs for etcd [be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39] ...
	I0717 00:08:43.777786    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39"
	I0717 00:08:43.828700    8602 logs.go:123] Gathering logs for coredns [33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21] ...
	I0717 00:08:43.828734    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21"
	I0717 00:08:43.878952    8602 logs.go:123] Gathering logs for kube-proxy [f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a] ...
	I0717 00:08:43.879046    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a"
	I0717 00:08:43.918154    8602 logs.go:123] Gathering logs for kindnet [8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e] ...
	I0717 00:08:43.918181    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e"
	I0717 00:08:43.976888    8602 logs.go:123] Gathering logs for CRI-O ...
	I0717 00:08:43.976925    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 00:08:44.084641    8602 logs.go:123] Gathering logs for kubelet ...
	I0717 00:08:44.084678    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 00:08:44.182562    8602 logs.go:123] Gathering logs for dmesg ...
	I0717 00:08:44.182597    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 00:08:44.196547    8602 logs.go:123] Gathering logs for describe nodes ...
	I0717 00:08:44.196574    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 00:08:44.363438    8602 logs.go:123] Gathering logs for kube-scheduler [eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd] ...
	I0717 00:08:44.363465    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd"
	I0717 00:08:46.918243    8602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:08:46.931654    8602 api_server.go:72] duration metric: took 1m45.30722339s to wait for apiserver process to appear ...
	I0717 00:08:46.931682    8602 api_server.go:88] waiting for apiserver healthz status ...
	I0717 00:08:46.931716    8602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 00:08:46.931776    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 00:08:46.971787    8602 cri.go:89] found id: "84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1"
	I0717 00:08:46.971812    8602 cri.go:89] found id: ""
	I0717 00:08:46.971821    8602 logs.go:276] 1 containers: [84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1]
	I0717 00:08:46.971876    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:46.975560    8602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 00:08:46.975668    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 00:08:47.015622    8602 cri.go:89] found id: "be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39"
	I0717 00:08:47.015642    8602 cri.go:89] found id: ""
	I0717 00:08:47.015650    8602 logs.go:276] 1 containers: [be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39]
	I0717 00:08:47.015705    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:47.019106    8602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 00:08:47.019178    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 00:08:47.057979    8602 cri.go:89] found id: "33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21"
	I0717 00:08:47.058002    8602 cri.go:89] found id: ""
	I0717 00:08:47.058010    8602 logs.go:276] 1 containers: [33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21]
	I0717 00:08:47.058066    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:47.061727    8602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 00:08:47.061843    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 00:08:47.107057    8602 cri.go:89] found id: "eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd"
	I0717 00:08:47.107080    8602 cri.go:89] found id: ""
	I0717 00:08:47.107089    8602 logs.go:276] 1 containers: [eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd]
	I0717 00:08:47.107148    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:47.110574    8602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 00:08:47.110641    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 00:08:47.151779    8602 cri.go:89] found id: "f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a"
	I0717 00:08:47.151810    8602 cri.go:89] found id: ""
	I0717 00:08:47.151819    8602 logs.go:276] 1 containers: [f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a]
	I0717 00:08:47.151872    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:47.155501    8602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 00:08:47.155619    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 00:08:47.196072    8602 cri.go:89] found id: "8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883"
	I0717 00:08:47.196095    8602 cri.go:89] found id: ""
	I0717 00:08:47.196103    8602 logs.go:276] 1 containers: [8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883]
	I0717 00:08:47.196158    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:47.199729    8602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 00:08:47.199799    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 00:08:47.239714    8602 cri.go:89] found id: "8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e"
	I0717 00:08:47.239737    8602 cri.go:89] found id: ""
	I0717 00:08:47.239745    8602 logs.go:276] 1 containers: [8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e]
	I0717 00:08:47.239816    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:47.243303    8602 logs.go:123] Gathering logs for kubelet ...
	I0717 00:08:47.243330    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 00:08:47.335189    8602 logs.go:123] Gathering logs for dmesg ...
	I0717 00:08:47.335226    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 00:08:47.348041    8602 logs.go:123] Gathering logs for kube-scheduler [eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd] ...
	I0717 00:08:47.348076    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd"
	I0717 00:08:47.403600    8602 logs.go:123] Gathering logs for kindnet [8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e] ...
	I0717 00:08:47.403635    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e"
	I0717 00:08:47.449666    8602 logs.go:123] Gathering logs for CRI-O ...
	I0717 00:08:47.449696    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 00:08:47.548623    8602 logs.go:123] Gathering logs for describe nodes ...
	I0717 00:08:47.548657    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 00:08:47.684078    8602 logs.go:123] Gathering logs for kube-apiserver [84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1] ...
	I0717 00:08:47.684106    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1"
	I0717 00:08:47.749877    8602 logs.go:123] Gathering logs for etcd [be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39] ...
	I0717 00:08:47.749909    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39"
	I0717 00:08:47.805556    8602 logs.go:123] Gathering logs for coredns [33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21] ...
	I0717 00:08:47.805587    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21"
	I0717 00:08:47.849455    8602 logs.go:123] Gathering logs for kube-proxy [f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a] ...
	I0717 00:08:47.849483    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a"
	I0717 00:08:47.885552    8602 logs.go:123] Gathering logs for kube-controller-manager [8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883] ...
	I0717 00:08:47.885625    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883"
	I0717 00:08:47.958289    8602 logs.go:123] Gathering logs for container status ...
	I0717 00:08:47.958324    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 00:08:50.512129    8602 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0717 00:08:50.521633    8602 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0717 00:08:50.522658    8602 api_server.go:141] control plane version: v1.30.2
	I0717 00:08:50.522689    8602 api_server.go:131] duration metric: took 3.590999725s to wait for apiserver health ...
	I0717 00:08:50.522698    8602 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 00:08:50.522730    8602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 00:08:50.522820    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 00:08:50.561758    8602 cri.go:89] found id: "84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1"
	I0717 00:08:50.561780    8602 cri.go:89] found id: ""
	I0717 00:08:50.561788    8602 logs.go:276] 1 containers: [84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1]
	I0717 00:08:50.561845    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:50.565306    8602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 00:08:50.565379    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 00:08:50.605124    8602 cri.go:89] found id: "be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39"
	I0717 00:08:50.605194    8602 cri.go:89] found id: ""
	I0717 00:08:50.605220    8602 logs.go:276] 1 containers: [be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39]
	I0717 00:08:50.605301    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:50.608800    8602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 00:08:50.608870    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 00:08:50.646243    8602 cri.go:89] found id: "33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21"
	I0717 00:08:50.646267    8602 cri.go:89] found id: ""
	I0717 00:08:50.646275    8602 logs.go:276] 1 containers: [33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21]
	I0717 00:08:50.646329    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:50.649736    8602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 00:08:50.649803    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 00:08:50.688242    8602 cri.go:89] found id: "eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd"
	I0717 00:08:50.688264    8602 cri.go:89] found id: ""
	I0717 00:08:50.688273    8602 logs.go:276] 1 containers: [eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd]
	I0717 00:08:50.688328    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:50.691874    8602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 00:08:50.691994    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 00:08:50.729695    8602 cri.go:89] found id: "f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a"
	I0717 00:08:50.729717    8602 cri.go:89] found id: ""
	I0717 00:08:50.729724    8602 logs.go:276] 1 containers: [f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a]
	I0717 00:08:50.729789    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:50.733191    8602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 00:08:50.733258    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 00:08:50.782851    8602 cri.go:89] found id: "8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883"
	I0717 00:08:50.782872    8602 cri.go:89] found id: ""
	I0717 00:08:50.782880    8602 logs.go:276] 1 containers: [8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883]
	I0717 00:08:50.782934    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:50.786841    8602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 00:08:50.786911    8602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 00:08:50.824534    8602 cri.go:89] found id: "8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e"
	I0717 00:08:50.824605    8602 cri.go:89] found id: ""
	I0717 00:08:50.824621    8602 logs.go:276] 1 containers: [8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e]
	I0717 00:08:50.824688    8602 ssh_runner.go:195] Run: which crictl
	I0717 00:08:50.828069    8602 logs.go:123] Gathering logs for kube-apiserver [84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1] ...
	I0717 00:08:50.828172    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1"
	I0717 00:08:50.898105    8602 logs.go:123] Gathering logs for coredns [33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21] ...
	I0717 00:08:50.898140    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21"
	I0717 00:08:50.941620    8602 logs.go:123] Gathering logs for kube-proxy [f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a] ...
	I0717 00:08:50.941645    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a"
	I0717 00:08:50.979078    8602 logs.go:123] Gathering logs for container status ...
	I0717 00:08:50.979104    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 00:08:51.023447    8602 logs.go:123] Gathering logs for dmesg ...
	I0717 00:08:51.023478    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 00:08:51.035793    8602 logs.go:123] Gathering logs for describe nodes ...
	I0717 00:08:51.035819    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 00:08:51.176202    8602 logs.go:123] Gathering logs for etcd [be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39] ...
	I0717 00:08:51.176233    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39"
	I0717 00:08:51.224401    8602 logs.go:123] Gathering logs for kube-scheduler [eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd] ...
	I0717 00:08:51.224432    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd"
	I0717 00:08:51.270906    8602 logs.go:123] Gathering logs for kube-controller-manager [8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883] ...
	I0717 00:08:51.270941    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883"
	I0717 00:08:51.360049    8602 logs.go:123] Gathering logs for kindnet [8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e] ...
	I0717 00:08:51.360084    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e"
	I0717 00:08:51.405552    8602 logs.go:123] Gathering logs for CRI-O ...
	I0717 00:08:51.405585    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 00:08:51.497152    8602 logs.go:123] Gathering logs for kubelet ...
	I0717 00:08:51.497189    8602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 00:08:54.103936    8602 system_pods.go:59] 18 kube-system pods found
	I0717 00:08:54.103976    8602 system_pods.go:61] "coredns-7db6d8ff4d-p58r6" [95609ac0-378e-4169-a21c-a18fd2036b08] Running
	I0717 00:08:54.103984    8602 system_pods.go:61] "csi-hostpath-attacher-0" [65e41986-2d2f-471c-8d19-a5620abf95b6] Running
	I0717 00:08:54.103990    8602 system_pods.go:61] "csi-hostpath-resizer-0" [3e88069a-e672-4a74-87bc-e1a71f52778a] Running
	I0717 00:08:54.103995    8602 system_pods.go:61] "csi-hostpathplugin-xkhk8" [d88c0ab7-3d63-42dd-a9b2-c2192513a989] Running
	I0717 00:08:54.104000    8602 system_pods.go:61] "etcd-addons-579136" [78bbc9c5-4bcb-4d26-9573-566c4362c019] Running
	I0717 00:08:54.104006    8602 system_pods.go:61] "kindnet-nv8dn" [5596281d-4baf-4082-b2ad-fe1547266b35] Running
	I0717 00:08:54.104010    8602 system_pods.go:61] "kube-apiserver-addons-579136" [e11fdf68-2d03-4ce4-b284-22039a659cf1] Running
	I0717 00:08:54.104014    8602 system_pods.go:61] "kube-controller-manager-addons-579136" [030d9096-10c3-47c9-aeab-5c8edde94b8d] Running
	I0717 00:08:54.104019    8602 system_pods.go:61] "kube-ingress-dns-minikube" [71c59a88-c4ab-4c05-9b67-191701cdb616] Running
	I0717 00:08:54.104023    8602 system_pods.go:61] "kube-proxy-b7z7h" [40503070-a17c-4e76-9aaf-c1157a0270ad] Running
	I0717 00:08:54.104028    8602 system_pods.go:61] "kube-scheduler-addons-579136" [178f3333-59ee-41b7-8f2b-bb2da614cbfe] Running
	I0717 00:08:54.104035    8602 system_pods.go:61] "metrics-server-c59844bb4-hqndr" [fd5ef1e8-e8e0-48cc-b59f-0b5862eb6c62] Running
	I0717 00:08:54.104040    8602 system_pods.go:61] "nvidia-device-plugin-daemonset-r42hf" [3055b027-1414-4787-9a8e-0a95e312c842] Running
	I0717 00:08:54.104047    8602 system_pods.go:61] "registry-9j5kz" [98b03ce8-0e8f-459e-860d-ffeebb54febc] Running
	I0717 00:08:54.104052    8602 system_pods.go:61] "registry-proxy-qckjd" [7a2c2cd9-4d11-44d7-a720-59b20cb1e5c7] Running
	I0717 00:08:54.104056    8602 system_pods.go:61] "snapshot-controller-745499f584-gvc85" [f2399c6b-60ad-432b-bb19-c442a6da83fc] Running
	I0717 00:08:54.104061    8602 system_pods.go:61] "snapshot-controller-745499f584-j5445" [8bcc60b5-f2c2-4b0d-97d6-0527fed9e203] Running
	I0717 00:08:54.104065    8602 system_pods.go:61] "storage-provisioner" [e06986b3-ed58-46c2-8c17-4b63bd9656e6] Running
	I0717 00:08:54.104072    8602 system_pods.go:74] duration metric: took 3.581367726s to wait for pod list to return data ...
	I0717 00:08:54.104088    8602 default_sa.go:34] waiting for default service account to be created ...
	I0717 00:08:54.106731    8602 default_sa.go:45] found service account: "default"
	I0717 00:08:54.106777    8602 default_sa.go:55] duration metric: took 2.662289ms for default service account to be created ...
	I0717 00:08:54.106790    8602 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 00:08:54.117049    8602 system_pods.go:86] 18 kube-system pods found
	I0717 00:08:54.117082    8602 system_pods.go:89] "coredns-7db6d8ff4d-p58r6" [95609ac0-378e-4169-a21c-a18fd2036b08] Running
	I0717 00:08:54.117090    8602 system_pods.go:89] "csi-hostpath-attacher-0" [65e41986-2d2f-471c-8d19-a5620abf95b6] Running
	I0717 00:08:54.117095    8602 system_pods.go:89] "csi-hostpath-resizer-0" [3e88069a-e672-4a74-87bc-e1a71f52778a] Running
	I0717 00:08:54.117100    8602 system_pods.go:89] "csi-hostpathplugin-xkhk8" [d88c0ab7-3d63-42dd-a9b2-c2192513a989] Running
	I0717 00:08:54.117104    8602 system_pods.go:89] "etcd-addons-579136" [78bbc9c5-4bcb-4d26-9573-566c4362c019] Running
	I0717 00:08:54.117108    8602 system_pods.go:89] "kindnet-nv8dn" [5596281d-4baf-4082-b2ad-fe1547266b35] Running
	I0717 00:08:54.117113    8602 system_pods.go:89] "kube-apiserver-addons-579136" [e11fdf68-2d03-4ce4-b284-22039a659cf1] Running
	I0717 00:08:54.117118    8602 system_pods.go:89] "kube-controller-manager-addons-579136" [030d9096-10c3-47c9-aeab-5c8edde94b8d] Running
	I0717 00:08:54.117121    8602 system_pods.go:89] "kube-ingress-dns-minikube" [71c59a88-c4ab-4c05-9b67-191701cdb616] Running
	I0717 00:08:54.117126    8602 system_pods.go:89] "kube-proxy-b7z7h" [40503070-a17c-4e76-9aaf-c1157a0270ad] Running
	I0717 00:08:54.117130    8602 system_pods.go:89] "kube-scheduler-addons-579136" [178f3333-59ee-41b7-8f2b-bb2da614cbfe] Running
	I0717 00:08:54.117135    8602 system_pods.go:89] "metrics-server-c59844bb4-hqndr" [fd5ef1e8-e8e0-48cc-b59f-0b5862eb6c62] Running
	I0717 00:08:54.117141    8602 system_pods.go:89] "nvidia-device-plugin-daemonset-r42hf" [3055b027-1414-4787-9a8e-0a95e312c842] Running
	I0717 00:08:54.117145    8602 system_pods.go:89] "registry-9j5kz" [98b03ce8-0e8f-459e-860d-ffeebb54febc] Running
	I0717 00:08:54.117159    8602 system_pods.go:89] "registry-proxy-qckjd" [7a2c2cd9-4d11-44d7-a720-59b20cb1e5c7] Running
	I0717 00:08:54.117164    8602 system_pods.go:89] "snapshot-controller-745499f584-gvc85" [f2399c6b-60ad-432b-bb19-c442a6da83fc] Running
	I0717 00:08:54.117168    8602 system_pods.go:89] "snapshot-controller-745499f584-j5445" [8bcc60b5-f2c2-4b0d-97d6-0527fed9e203] Running
	I0717 00:08:54.117175    8602 system_pods.go:89] "storage-provisioner" [e06986b3-ed58-46c2-8c17-4b63bd9656e6] Running
	I0717 00:08:54.117191    8602 system_pods.go:126] duration metric: took 10.395642ms to wait for k8s-apps to be running ...
	I0717 00:08:54.117199    8602 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 00:08:54.117258    8602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:08:54.129931    8602 system_svc.go:56] duration metric: took 12.723135ms WaitForService to wait for kubelet
	I0717 00:08:54.129963    8602 kubeadm.go:582] duration metric: took 1m52.505539059s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:08:54.129984    8602 node_conditions.go:102] verifying NodePressure condition ...
	I0717 00:08:54.133092    8602 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0717 00:08:54.133127    8602 node_conditions.go:123] node cpu capacity is 2
	I0717 00:08:54.133139    8602 node_conditions.go:105] duration metric: took 3.150518ms to run NodePressure ...
	I0717 00:08:54.133153    8602 start.go:241] waiting for startup goroutines ...
	I0717 00:08:54.133160    8602 start.go:246] waiting for cluster config update ...
	I0717 00:08:54.133181    8602 start.go:255] writing updated cluster config ...
	I0717 00:08:54.133483    8602 ssh_runner.go:195] Run: rm -f paused
	I0717 00:08:54.455301    8602 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 00:08:54.459586    8602 out.go:177] * Done! kubectl is now configured to use "addons-579136" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.474912497Z" level=info msg="Stopped pod sandbox (already stopped): b2177e92f7aa014badfbb161839316142dacdbd76b35f980f8705981130bebec" id=80bc0ff2-2c5d-49c7-9cc8-37a1d3da2e97 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.475253219Z" level=info msg="Removing pod sandbox: b2177e92f7aa014badfbb161839316142dacdbd76b35f980f8705981130bebec" id=1cb6111e-dc77-4552-9a90-3cbc409fa16a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.483271373Z" level=info msg="Removed pod sandbox: b2177e92f7aa014badfbb161839316142dacdbd76b35f980f8705981130bebec" id=1cb6111e-dc77-4552-9a90-3cbc409fa16a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.531979335Z" level=info msg="Stopped container 7e823ef767fe7576a636726e1f59923ffc8244134e8c40b7910491b750ba2c8a: ingress-nginx/ingress-nginx-controller-768f948f8f-99g7h/controller" id=c4a4bca5-38d5-46ee-be1e-802d0769da7f name=/runtime.v1.RuntimeService/StopContainer
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.532519876Z" level=info msg="Stopping pod sandbox: 7d488a0d17d123f28486a990d183d1eecd0d08eb14eccda2bd773561855eba90" id=e6d08a30-106a-48f2-92d2-4adeb0765747 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.536237985Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-M7IQKSKMTBP5O5UC - [0:0]\n:KUBE-HP-XGOT7NAVQAOHKVVK - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-XGOT7NAVQAOHKVVK\n-X KUBE-HP-M7IQKSKMTBP5O5UC\nCOMMIT\n"
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.550701247Z" level=info msg="Closing host port tcp:80"
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.550755075Z" level=info msg="Closing host port tcp:443"
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.552484844Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.552511807Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.552683087Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-768f948f8f-99g7h Namespace:ingress-nginx ID:7d488a0d17d123f28486a990d183d1eecd0d08eb14eccda2bd773561855eba90 UID:d651a115-e23d-4adf-9987-ebb248d4c190 NetNS:/var/run/netns/0a02538d-6f41-4f09-9921-40e1f2643736 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.552828701Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-768f948f8f-99g7h from CNI network \"kindnet\" (type=ptp)"
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.582642003Z" level=info msg="Stopped pod sandbox: 7d488a0d17d123f28486a990d183d1eecd0d08eb14eccda2bd773561855eba90" id=e6d08a30-106a-48f2-92d2-4adeb0765747 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.686651751Z" level=info msg="Removing container: 7e823ef767fe7576a636726e1f59923ffc8244134e8c40b7910491b750ba2c8a" id=453580dc-d513-4c81-bc7d-e82340fd2435 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 00:12:48 addons-579136 crio[966]: time="2024-07-17 00:12:48.702699391Z" level=info msg="Removed container 7e823ef767fe7576a636726e1f59923ffc8244134e8c40b7910491b750ba2c8a: ingress-nginx/ingress-nginx-controller-768f948f8f-99g7h/controller" id=453580dc-d513-4c81-bc7d-e82340fd2435 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 00:13:48 addons-579136 crio[966]: time="2024-07-17 00:13:48.485893097Z" level=info msg="Stopping pod sandbox: 7d488a0d17d123f28486a990d183d1eecd0d08eb14eccda2bd773561855eba90" id=1f79eeeb-499d-472f-97e9-7602e066145f name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:13:48 addons-579136 crio[966]: time="2024-07-17 00:13:48.485942822Z" level=info msg="Stopped pod sandbox (already stopped): 7d488a0d17d123f28486a990d183d1eecd0d08eb14eccda2bd773561855eba90" id=1f79eeeb-499d-472f-97e9-7602e066145f name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:13:48 addons-579136 crio[966]: time="2024-07-17 00:13:48.486498051Z" level=info msg="Removing pod sandbox: 7d488a0d17d123f28486a990d183d1eecd0d08eb14eccda2bd773561855eba90" id=d011ac44-dcac-41f9-9490-bd84d1e2b0be name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 00:13:48 addons-579136 crio[966]: time="2024-07-17 00:13:48.494479648Z" level=info msg="Removed pod sandbox: 7d488a0d17d123f28486a990d183d1eecd0d08eb14eccda2bd773561855eba90" id=d011ac44-dcac-41f9-9490-bd84d1e2b0be name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 00:16:16 addons-579136 crio[966]: time="2024-07-17 00:16:16.767087071Z" level=info msg="Stopping container: 2dc85b71327ddc104480cf081cf98a37bdcaeafd906b9a35fb1dfb3850adc837 (timeout: 30s)" id=3548bdf5-d89c-46c8-b730-17ec8b531a63 name=/runtime.v1.RuntimeService/StopContainer
	Jul 17 00:16:17 addons-579136 crio[966]: time="2024-07-17 00:16:17.925847021Z" level=info msg="Stopped container 2dc85b71327ddc104480cf081cf98a37bdcaeafd906b9a35fb1dfb3850adc837: kube-system/metrics-server-c59844bb4-hqndr/metrics-server" id=3548bdf5-d89c-46c8-b730-17ec8b531a63 name=/runtime.v1.RuntimeService/StopContainer
	Jul 17 00:16:17 addons-579136 crio[966]: time="2024-07-17 00:16:17.926663291Z" level=info msg="Stopping pod sandbox: 5efc11cb2e3a24a406631ec4af91c1810a1642a2cab3d542df04f6ebb2e72c2b" id=4ca6792c-5b6a-4d9a-8845-30af7bd52f98 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:16:17 addons-579136 crio[966]: time="2024-07-17 00:16:17.927010832Z" level=info msg="Got pod network &{Name:metrics-server-c59844bb4-hqndr Namespace:kube-system ID:5efc11cb2e3a24a406631ec4af91c1810a1642a2cab3d542df04f6ebb2e72c2b UID:fd5ef1e8-e8e0-48cc-b59f-0b5862eb6c62 NetNS:/var/run/netns/f38cd8a8-9b78-4ef4-8d6b-e8fccdd05571 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 17 00:16:17 addons-579136 crio[966]: time="2024-07-17 00:16:17.927146846Z" level=info msg="Deleting pod kube-system_metrics-server-c59844bb4-hqndr from CNI network \"kindnet\" (type=ptp)"
	Jul 17 00:16:17 addons-579136 crio[966]: time="2024-07-17 00:16:17.953042799Z" level=info msg="Stopped pod sandbox: 5efc11cb2e3a24a406631ec4af91c1810a1642a2cab3d542df04f6ebb2e72c2b" id=4ca6792c-5b6a-4d9a-8845-30af7bd52f98 name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ab1d5f3067a11       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   4c6bd0d427be0       hello-world-app-6778b5fc9f-kxhh8
	a4c547da00f93       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                         5 minutes ago       Running             nginx                     0                   bcf86b42b257d       nginx
	c62bfa850a7fb       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                   7 minutes ago       Running             headlamp                  0                   efb56325b9000       headlamp-7867546754-fq62p
	b571fbe936b7c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69            7 minutes ago       Running             gcp-auth                  0                   d2bde78635b59       gcp-auth-5db96cd9b4-hth2l
	48ac120f6ea13       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                         8 minutes ago       Running             yakd                      0                   b9e2bf010791f       yakd-dashboard-799879c74f-r64g4
	2dc85b71327dd       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70   8 minutes ago       Exited              metrics-server            0                   5efc11cb2e3a2       metrics-server-c59844bb4-hqndr
	d05ed15daf27e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        8 minutes ago       Running             storage-provisioner       0                   faae55f65a262       storage-provisioner
	33f540e04476e       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                        8 minutes ago       Running             coredns                   0                   be3266b7f5913       coredns-7db6d8ff4d-p58r6
	8c23c06f4e4c8       docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493                      9 minutes ago       Running             kindnet-cni               0                   81856d966fb7a       kindnet-nv8dn
	f597b2cda48f6       66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae                                                        9 minutes ago       Running             kube-proxy                0                   7d8ae7d2389b3       kube-proxy-b7z7h
	eec7d7d9059cb       c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5                                                        9 minutes ago       Running             kube-scheduler            0                   ffa811be62a9a       kube-scheduler-addons-579136
	8428206521ac5       e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567                                                        9 minutes ago       Running             kube-controller-manager   0                   3f6cd66e774e4       kube-controller-manager-addons-579136
	be2ec7b44daeb       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                        9 minutes ago       Running             etcd                      0                   828c8d42490e0       etcd-addons-579136
	84aa8590a287f       84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0                                                        9 minutes ago       Running             kube-apiserver            0                   e652878d1fc0c       kube-apiserver-addons-579136
	
	
	==> coredns [33f540e04476e771a19482fe4afe1055690e71f730fff2f7c3521276fd315a21] <==
	[INFO] 10.244.0.12:42933 - 3590 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002674955s
	[INFO] 10.244.0.12:42665 - 6894 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00011142s
	[INFO] 10.244.0.12:42665 - 40939 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000143077s
	[INFO] 10.244.0.12:40110 - 40228 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000106703s
	[INFO] 10.244.0.12:40110 - 40225 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000157535s
	[INFO] 10.244.0.12:52982 - 38654 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000059219s
	[INFO] 10.244.0.12:52982 - 51699 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000050094s
	[INFO] 10.244.0.12:35738 - 18054 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00004362s
	[INFO] 10.244.0.12:35738 - 30852 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000061499s
	[INFO] 10.244.0.12:43813 - 25020 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001345363s
	[INFO] 10.244.0.12:43813 - 35234 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001404713s
	[INFO] 10.244.0.12:33019 - 13528 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000075259s
	[INFO] 10.244.0.12:33019 - 24542 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000054353s
	[INFO] 10.244.0.20:50705 - 41399 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002272195s
	[INFO] 10.244.0.20:36043 - 64044 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002464242s
	[INFO] 10.244.0.20:45241 - 58361 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000140107s
	[INFO] 10.244.0.20:49194 - 19126 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000065298s
	[INFO] 10.244.0.20:51582 - 45119 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114309s
	[INFO] 10.244.0.20:54636 - 34517 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000111568s
	[INFO] 10.244.0.20:39637 - 1710 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004920222s
	[INFO] 10.244.0.20:54995 - 11120 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.005614685s
	[INFO] 10.244.0.20:49724 - 41375 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001574409s
	[INFO] 10.244.0.20:53984 - 53524 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000822492s
	[INFO] 10.244.0.22:40141 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000188117s
	[INFO] 10.244.0.22:40456 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000099597s
	
	
	==> describe nodes <==
	Name:               addons-579136
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-579136
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=addons-579136
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T00_06_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-579136
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:06:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-579136
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:16:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:12:55 +0000   Wed, 17 Jul 2024 00:06:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:12:55 +0000   Wed, 17 Jul 2024 00:06:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:12:55 +0000   Wed, 17 Jul 2024 00:06:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:12:55 +0000   Wed, 17 Jul 2024 00:07:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-579136
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 95f4effa184244a7a48b8cd1f484601f
	  System UUID:                0dff8d19-ba0c-4a39-bf5a-66328e26eb1a
	  Boot ID:                    a28e50e2-5a2a-4346-aa05-4284fb20291b
	  Kernel Version:             5.15.0-1064-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-kxhh8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  gcp-auth                    gcp-auth-5db96cd9b4-hth2l                0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  headlamp                    headlamp-7867546754-fq62p                0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 coredns-7db6d8ff4d-p58r6                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     9m15s
	  kube-system                 etcd-addons-579136                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m30s
	  kube-system                 kindnet-nv8dn                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m16s
	  kube-system                 kube-apiserver-addons-579136             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m30s
	  kube-system                 kube-controller-manager-addons-579136    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m30s
	  kube-system                 kube-proxy-b7z7h                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-scheduler-addons-579136             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m30s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  yakd-dashboard              yakd-dashboard-799879c74f-r64g4          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m10s  kube-proxy       
	  Normal  Starting                 9m31s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m30s  kubelet          Node addons-579136 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m30s  kubelet          Node addons-579136 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m30s  kubelet          Node addons-579136 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m17s  node-controller  Node addons-579136 event: Registered Node addons-579136 in Controller
	  Normal  NodeReady                8m59s  kubelet          Node addons-579136 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul16 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015067] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.476271] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.061374] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002702] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.017886] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.004642] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.003674] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.659967] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.278169] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [be2ec7b44daebb438a312968b1930ea532d1420680b4a7baeaca00c2cbf72e39] <==
	{"level":"info","ts":"2024-07-17T00:06:42.943081Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-17T00:06:42.94313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-07-17T00:06:42.943164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-17T00:06:42.946464Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:06:42.951018Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-579136 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T00:06:42.951196Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T00:06:42.952964Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-07-17T00:06:42.955122Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:06:42.955375Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:06:42.955453Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:06:42.955148Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T00:06:42.963855Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T00:06:42.963941Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T00:06:42.99533Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-07-17T00:07:02.423627Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.245989ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128030573269489984 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/controllerrevisions/kube-system/kube-proxy-669fc44fbc\" mod_revision:0 > success:<request_put:<key:\"/registry/controllerrevisions/kube-system/kube-proxy-669fc44fbc\" value_size:2056 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-17T00:07:02.42609Z","caller":"traceutil/trace.go:171","msg":"trace[191558539] transaction","detail":"{read_only:false; response_revision:325; number_of_response:1; }","duration":"134.887529ms","start":"2024-07-17T00:07:02.29117Z","end":"2024-07-17T00:07:02.426057Z","steps":["trace[191558539] 'process raft request'  (duration: 31.721331ms)","trace[191558539] 'compare'  (duration: 100.109821ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:07:02.426708Z","caller":"traceutil/trace.go:171","msg":"trace[933581596] linearizableReadLoop","detail":"{readStateIndex:337; appliedIndex:334; }","duration":"101.300716ms","start":"2024-07-17T00:07:02.325396Z","end":"2024-07-17T00:07:02.426697Z","steps":["trace[933581596] 'read index received'  (duration: 22.641252ms)","trace[933581596] 'applied index is now lower than readState.Index'  (duration: 78.658668ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T00:07:02.427594Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.548239ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-node-lease/\" range_end:\"/registry/serviceaccounts/kube-node-lease0\" ","response":"range_response_count:1 size:187"}
	{"level":"info","ts":"2024-07-17T00:07:02.430647Z","caller":"traceutil/trace.go:171","msg":"trace[172502781] range","detail":"{range_begin:/registry/serviceaccounts/kube-node-lease/; range_end:/registry/serviceaccounts/kube-node-lease0; response_count:1; response_revision:327; }","duration":"105.24439ms","start":"2024-07-17T00:07:02.325392Z","end":"2024-07-17T00:07:02.430636Z","steps":["trace[172502781] 'agreement among raft nodes before linearized reading'  (duration: 101.496168ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:07:02.430455Z","caller":"traceutil/trace.go:171","msg":"trace[1608852876] transaction","detail":"{read_only:false; response_revision:326; number_of_response:1; }","duration":"138.984159ms","start":"2024-07-17T00:07:02.291457Z","end":"2024-07-17T00:07:02.430441Z","steps":["trace[1608852876] 'process raft request'  (duration: 132.473821ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:07:02.430562Z","caller":"traceutil/trace.go:171","msg":"trace[722345133] transaction","detail":"{read_only:false; response_revision:327; number_of_response:1; }","duration":"107.836974ms","start":"2024-07-17T00:07:02.322719Z","end":"2024-07-17T00:07:02.430556Z","steps":["trace[722345133] 'process raft request'  (duration: 101.297819ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:07:02.533235Z","caller":"traceutil/trace.go:171","msg":"trace[1596833142] transaction","detail":"{read_only:false; response_revision:328; number_of_response:1; }","duration":"106.533223ms","start":"2024-07-17T00:07:02.426646Z","end":"2024-07-17T00:07:02.533179Z","steps":["trace[1596833142] 'process raft request'  (duration: 75.375101ms)","trace[1596833142] 'store kv pair into bolt db' {req_type:put; key:/registry/pods/kube-system/coredns-7db6d8ff4d-t4wpd; req_size:3502; } (duration: 14.051443ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:07:03.760442Z","caller":"traceutil/trace.go:171","msg":"trace[1737087953] transaction","detail":"{read_only:false; response_revision:344; number_of_response:1; }","duration":"132.01645ms","start":"2024-07-17T00:07:03.628408Z","end":"2024-07-17T00:07:03.760424Z","steps":["trace[1737087953] 'process raft request'  (duration: 100.641828ms)","trace[1737087953] 'compare'  (duration: 30.951963ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:07:03.792217Z","caller":"traceutil/trace.go:171","msg":"trace[1036976757] transaction","detail":"{read_only:false; response_revision:345; number_of_response:1; }","duration":"163.691068ms","start":"2024-07-17T00:07:03.628511Z","end":"2024-07-17T00:07:03.792202Z","steps":["trace[1036976757] 'process raft request'  (duration: 131.5916ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:07:05.587144Z","caller":"traceutil/trace.go:171","msg":"trace[517915260] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"142.107397ms","start":"2024-07-17T00:07:05.445007Z","end":"2024-07-17T00:07:05.587114Z","steps":["trace[517915260] 'process raft request'  (duration: 20.16383ms)"],"step_count":1}
	
	
	==> gcp-auth [b571fbe936b7c1b1b4b4b7a5bcd9aa9d2260e3bb0f8c27d9cf00054905a69996] <==
	2024/07/17 00:08:32 GCP Auth Webhook started!
	2024/07/17 00:08:55 Ready to marshal response ...
	2024/07/17 00:08:55 Ready to write response ...
	2024/07/17 00:08:55 Ready to marshal response ...
	2024/07/17 00:08:55 Ready to write response ...
	2024/07/17 00:08:55 Ready to marshal response ...
	2024/07/17 00:08:55 Ready to write response ...
	2024/07/17 00:09:06 Ready to marshal response ...
	2024/07/17 00:09:06 Ready to write response ...
	2024/07/17 00:09:13 Ready to marshal response ...
	2024/07/17 00:09:13 Ready to write response ...
	2024/07/17 00:09:13 Ready to marshal response ...
	2024/07/17 00:09:13 Ready to write response ...
	2024/07/17 00:09:23 Ready to marshal response ...
	2024/07/17 00:09:23 Ready to write response ...
	2024/07/17 00:09:35 Ready to marshal response ...
	2024/07/17 00:09:35 Ready to write response ...
	2024/07/17 00:10:06 Ready to marshal response ...
	2024/07/17 00:10:06 Ready to write response ...
	2024/07/17 00:10:23 Ready to marshal response ...
	2024/07/17 00:10:23 Ready to write response ...
	2024/07/17 00:12:42 Ready to marshal response ...
	2024/07/17 00:12:42 Ready to write response ...
	
	
	==> kernel <==
	 00:16:18 up 58 min,  0 users,  load average: 0.21, 0.39, 0.35
	Linux addons-579136 5.15.0-1064-aws #70~20.04.1-Ubuntu SMP Thu Jun 27 14:52:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [8c23c06f4e4c8e5e48eec8d40bce33d44d4c120fa35bc39efd73523bc92cea7e] <==
	I0717 00:15:08.817272       1 main.go:303] handling current node
	I0717 00:15:18.817655       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:15:18.817782       1 main.go:303] handling current node
	W0717 00:15:19.311661       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0717 00:15:19.311696       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0717 00:15:23.664771       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:15:23.664803       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0717 00:15:28.816757       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:15:28.816794       1 main.go:303] handling current node
	W0717 00:15:29.182886       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:15:29.182922       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0717 00:15:38.817749       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:15:38.817783       1 main.go:303] handling current node
	I0717 00:15:48.817744       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:15:48.817795       1 main.go:303] handling current node
	I0717 00:15:58.817423       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:15:58.817463       1 main.go:303] handling current node
	W0717 00:16:03.959720       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0717 00:16:03.959761       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0717 00:16:06.086058       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:16:06.086090       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0717 00:16:08.817691       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:16:08.817726       1 main.go:303] handling current node
	W0717 00:16:14.991737       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:16:14.991771       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	
	
	==> kube-apiserver [84aa8590a287fde18b579c75c4bdb93911ca13210454018f7622e9db13ce3ad1] <==
	E0717 00:08:43.002287       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.101.247:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.101.247:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.101.247:443: connect: connection refused
	E0717 00:08:43.007748       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.101.247:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.101.247:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.101.247:443: connect: connection refused
	I0717 00:08:43.094329       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 00:08:55.363172       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.233.242"}
	E0717 00:09:24.728314       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0717 00:09:24.744065       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0717 00:09:24.761047       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0717 00:09:39.766954       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0717 00:09:46.780971       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0717 00:10:13.606286       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0717 00:10:14.648273       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0717 00:10:22.986395       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:10:22.986545       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:10:23.021995       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:10:23.022133       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:10:23.046350       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:10:23.046404       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:10:23.072418       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:10:23.072539       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:10:23.577872       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0717 00:10:23.909234       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.216.76"}
	W0717 00:10:24.023449       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0717 00:10:24.072683       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0717 00:10:24.089857       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0717 00:12:43.179745       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.15.92"}
	
	
	==> kube-controller-manager [8428206521ac560258acc725d29fbc068b22b5b0d34f0423f0d33faffb59b883] <==
	W0717 00:14:10.514069       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:14:10.514103       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:14:11.614462       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:14:11.614500       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:14:23.740415       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:14:23.740455       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:14:31.819146       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:14:31.819196       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:15:00.471421       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:15:00.471461       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:15:02.198113       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:15:02.198153       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:15:12.039625       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:15:12.039661       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:15:23.156149       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:15:23.156184       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:15:56.684069       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:15:56.684105       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:15:59.127929       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:15:59.127964       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:15:59.213906       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:15:59.213941       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:16:11.939648       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:16:11.939687       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 00:16:16.743232       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="8.074µs"
	
	
	==> kube-proxy [f597b2cda48f68da88fc2949d9124a2660f94c3075096840af6a8f470cd4d88a] <==
	I0717 00:07:06.995959       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:07:07.175033       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0717 00:07:07.795969       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0717 00:07:07.796015       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:07:08.039048       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0717 00:07:08.048365       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0717 00:07:08.048494       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:07:08.048789       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:07:08.113845       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:07:08.136020       1 config.go:192] "Starting service config controller"
	I0717 00:07:08.136124       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:07:08.142893       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:07:08.142911       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:07:08.143414       1 config.go:319] "Starting node config controller"
	I0717 00:07:08.143432       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:07:08.250914       1 shared_informer.go:320] Caches are synced for node config
	I0717 00:07:08.251518       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:07:08.251581       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [eec7d7d9059cb27f74ed839a75e11e9b19a7caf95c991494a79594d2d61069cd] <==
	W0717 00:06:45.708771       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:06:45.709044       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 00:06:45.709086       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:06:45.709122       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:06:45.708810       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 00:06:45.709164       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 00:06:45.708918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 00:06:45.709182       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 00:06:45.708930       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:06:45.709198       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:06:45.708940       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:06:45.709236       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:06:45.709025       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 00:06:45.709260       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 00:06:45.711069       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:06:45.711148       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 00:06:46.666813       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 00:06:46.666852       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 00:06:46.725572       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 00:06:46.725607       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 00:06:46.740520       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:06:46.740560       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:06:46.850187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:06:46.850296       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0717 00:06:47.378323       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 00:12:45 addons-579136 kubelet[1563]: I0717 00:12:45.969592    1563 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50903e80-fe30-403b-b7f7-42b2770f1b4b" path="/var/lib/kubelet/pods/50903e80-fe30-403b-b7f7-42b2770f1b4b/volumes"
	Jul 17 00:12:45 addons-579136 kubelet[1563]: I0717 00:12:45.969971    1563 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c59a88-c4ab-4c05-9b67-191701cdb616" path="/var/lib/kubelet/pods/71c59a88-c4ab-4c05-9b67-191701cdb616/volumes"
	Jul 17 00:12:48 addons-579136 kubelet[1563]: I0717 00:12:48.417369    1563 scope.go:117] "RemoveContainer" containerID="8acd64cce55666d050db2eb97fc00d47366514ad6329ab239f7e82593fd76969"
	Jul 17 00:12:48 addons-579136 kubelet[1563]: I0717 00:12:48.436017    1563 scope.go:117] "RemoveContainer" containerID="50b8172017038b30dd4d66d98764922312b52c15e1c5738928b86e91045cca32"
	Jul 17 00:12:48 addons-579136 kubelet[1563]: I0717 00:12:48.634607    1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d45pq\" (UniqueName: \"kubernetes.io/projected/d651a115-e23d-4adf-9987-ebb248d4c190-kube-api-access-d45pq\") pod \"d651a115-e23d-4adf-9987-ebb248d4c190\" (UID: \"d651a115-e23d-4adf-9987-ebb248d4c190\") "
	Jul 17 00:12:48 addons-579136 kubelet[1563]: I0717 00:12:48.634660    1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d651a115-e23d-4adf-9987-ebb248d4c190-webhook-cert\") pod \"d651a115-e23d-4adf-9987-ebb248d4c190\" (UID: \"d651a115-e23d-4adf-9987-ebb248d4c190\") "
	Jul 17 00:12:48 addons-579136 kubelet[1563]: I0717 00:12:48.637035    1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d651a115-e23d-4adf-9987-ebb248d4c190-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d651a115-e23d-4adf-9987-ebb248d4c190" (UID: "d651a115-e23d-4adf-9987-ebb248d4c190"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 00:12:48 addons-579136 kubelet[1563]: I0717 00:12:48.638129    1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d651a115-e23d-4adf-9987-ebb248d4c190-kube-api-access-d45pq" (OuterVolumeSpecName: "kube-api-access-d45pq") pod "d651a115-e23d-4adf-9987-ebb248d4c190" (UID: "d651a115-e23d-4adf-9987-ebb248d4c190"). InnerVolumeSpecName "kube-api-access-d45pq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 00:12:48 addons-579136 kubelet[1563]: I0717 00:12:48.684970    1563 scope.go:117] "RemoveContainer" containerID="7e823ef767fe7576a636726e1f59923ffc8244134e8c40b7910491b750ba2c8a"
	Jul 17 00:12:48 addons-579136 kubelet[1563]: I0717 00:12:48.703056    1563 scope.go:117] "RemoveContainer" containerID="7e823ef767fe7576a636726e1f59923ffc8244134e8c40b7910491b750ba2c8a"
	Jul 17 00:12:48 addons-579136 kubelet[1563]: E0717 00:12:48.703409    1563 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e823ef767fe7576a636726e1f59923ffc8244134e8c40b7910491b750ba2c8a\": container with ID starting with 7e823ef767fe7576a636726e1f59923ffc8244134e8c40b7910491b750ba2c8a not found: ID does not exist" containerID="7e823ef767fe7576a636726e1f59923ffc8244134e8c40b7910491b750ba2c8a"
	Jul 17 00:12:48 addons-579136 kubelet[1563]: I0717 00:12:48.703446    1563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e823ef767fe7576a636726e1f59923ffc8244134e8c40b7910491b750ba2c8a"} err="failed to get container status \"7e823ef767fe7576a636726e1f59923ffc8244134e8c40b7910491b750ba2c8a\": rpc error: code = NotFound desc = could not find container \"7e823ef767fe7576a636726e1f59923ffc8244134e8c40b7910491b750ba2c8a\": container with ID starting with 7e823ef767fe7576a636726e1f59923ffc8244134e8c40b7910491b750ba2c8a not found: ID does not exist"
	Jul 17 00:12:48 addons-579136 kubelet[1563]: I0717 00:12:48.734956    1563 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d651a115-e23d-4adf-9987-ebb248d4c190-webhook-cert\") on node \"addons-579136\" DevicePath \"\""
	Jul 17 00:12:48 addons-579136 kubelet[1563]: I0717 00:12:48.734996    1563 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-d45pq\" (UniqueName: \"kubernetes.io/projected/d651a115-e23d-4adf-9987-ebb248d4c190-kube-api-access-d45pq\") on node \"addons-579136\" DevicePath \"\""
	Jul 17 00:12:49 addons-579136 kubelet[1563]: I0717 00:12:49.969266    1563 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d651a115-e23d-4adf-9987-ebb248d4c190" path="/var/lib/kubelet/pods/d651a115-e23d-4adf-9987-ebb248d4c190/volumes"
	Jul 17 00:16:18 addons-579136 kubelet[1563]: I0717 00:16:18.089636    1563 scope.go:117] "RemoveContainer" containerID="2dc85b71327ddc104480cf081cf98a37bdcaeafd906b9a35fb1dfb3850adc837"
	Jul 17 00:16:18 addons-579136 kubelet[1563]: I0717 00:16:18.115774    1563 scope.go:117] "RemoveContainer" containerID="2dc85b71327ddc104480cf081cf98a37bdcaeafd906b9a35fb1dfb3850adc837"
	Jul 17 00:16:18 addons-579136 kubelet[1563]: E0717 00:16:18.116167    1563 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dc85b71327ddc104480cf081cf98a37bdcaeafd906b9a35fb1dfb3850adc837\": container with ID starting with 2dc85b71327ddc104480cf081cf98a37bdcaeafd906b9a35fb1dfb3850adc837 not found: ID does not exist" containerID="2dc85b71327ddc104480cf081cf98a37bdcaeafd906b9a35fb1dfb3850adc837"
	Jul 17 00:16:18 addons-579136 kubelet[1563]: I0717 00:16:18.116207    1563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dc85b71327ddc104480cf081cf98a37bdcaeafd906b9a35fb1dfb3850adc837"} err="failed to get container status \"2dc85b71327ddc104480cf081cf98a37bdcaeafd906b9a35fb1dfb3850adc837\": rpc error: code = NotFound desc = could not find container \"2dc85b71327ddc104480cf081cf98a37bdcaeafd906b9a35fb1dfb3850adc837\": container with ID starting with 2dc85b71327ddc104480cf081cf98a37bdcaeafd906b9a35fb1dfb3850adc837 not found: ID does not exist"
	Jul 17 00:16:18 addons-579136 kubelet[1563]: I0717 00:16:18.124539    1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/fd5ef1e8-e8e0-48cc-b59f-0b5862eb6c62-tmp-dir\") pod \"fd5ef1e8-e8e0-48cc-b59f-0b5862eb6c62\" (UID: \"fd5ef1e8-e8e0-48cc-b59f-0b5862eb6c62\") "
	Jul 17 00:16:18 addons-579136 kubelet[1563]: I0717 00:16:18.124595    1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xt7x\" (UniqueName: \"kubernetes.io/projected/fd5ef1e8-e8e0-48cc-b59f-0b5862eb6c62-kube-api-access-4xt7x\") pod \"fd5ef1e8-e8e0-48cc-b59f-0b5862eb6c62\" (UID: \"fd5ef1e8-e8e0-48cc-b59f-0b5862eb6c62\") "
	Jul 17 00:16:18 addons-579136 kubelet[1563]: I0717 00:16:18.125212    1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd5ef1e8-e8e0-48cc-b59f-0b5862eb6c62-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "fd5ef1e8-e8e0-48cc-b59f-0b5862eb6c62" (UID: "fd5ef1e8-e8e0-48cc-b59f-0b5862eb6c62"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 17 00:16:18 addons-579136 kubelet[1563]: I0717 00:16:18.129815    1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd5ef1e8-e8e0-48cc-b59f-0b5862eb6c62-kube-api-access-4xt7x" (OuterVolumeSpecName: "kube-api-access-4xt7x") pod "fd5ef1e8-e8e0-48cc-b59f-0b5862eb6c62" (UID: "fd5ef1e8-e8e0-48cc-b59f-0b5862eb6c62"). InnerVolumeSpecName "kube-api-access-4xt7x". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 00:16:18 addons-579136 kubelet[1563]: I0717 00:16:18.225503    1563 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/fd5ef1e8-e8e0-48cc-b59f-0b5862eb6c62-tmp-dir\") on node \"addons-579136\" DevicePath \"\""
	Jul 17 00:16:18 addons-579136 kubelet[1563]: I0717 00:16:18.225543    1563 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4xt7x\" (UniqueName: \"kubernetes.io/projected/fd5ef1e8-e8e0-48cc-b59f-0b5862eb6c62-kube-api-access-4xt7x\") on node \"addons-579136\" DevicePath \"\""
	
	
	==> storage-provisioner [d05ed15daf27e6eb7314cfda07fd569bdde4ce60019af246e67c6daaa3837031] <==
	I0717 00:07:20.535428       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 00:07:20.551620       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 00:07:20.551746       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 00:07:20.562941       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 00:07:20.563170       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-579136_db04466c-585f-43a5-b0b8-e24129537628!
	I0717 00:07:20.566292       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"af356d7e-57de-482b-87c5-ff66524bddce", APIVersion:"v1", ResourceVersion:"886", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-579136_db04466c-585f-43a5-b0b8-e24129537628 became leader
	I0717 00:07:20.664545       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-579136_db04466c-585f-43a5-b0b8-e24129537628!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-579136 -n addons-579136
helpers_test.go:261: (dbg) Run:  kubectl --context addons-579136 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (360.48s)

Test pass (301/336)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 15.67
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.2/json-events 7.39
13 TestDownloadOnly/v1.30.2/preload-exists 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.07
18 TestDownloadOnly/v1.30.2/DeleteAll 0.2
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-beta.0/json-events 15.26
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.21
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.53
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 175.72
38 TestAddons/parallel/Registry 16.47
40 TestAddons/parallel/InspektorGadget 11.83
44 TestAddons/parallel/CSI 60.15
45 TestAddons/parallel/Headlamp 13.96
46 TestAddons/parallel/CloudSpanner 5.62
47 TestAddons/parallel/LocalPath 53.64
48 TestAddons/parallel/NvidiaDevicePlugin 6.51
49 TestAddons/parallel/Yakd 5
53 TestAddons/serial/GCPAuth/Namespaces 0.17
54 TestAddons/StoppedEnableDisable 12.14
55 TestCertOptions 39.22
56 TestCertExpiration 237.61
58 TestForceSystemdFlag 39.88
59 TestForceSystemdEnv 44.79
65 TestErrorSpam/setup 34.01
66 TestErrorSpam/start 0.73
67 TestErrorSpam/status 0.94
68 TestErrorSpam/pause 1.67
69 TestErrorSpam/unpause 1.68
70 TestErrorSpam/stop 1.39
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 60.75
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 36.53
77 TestFunctional/serial/KubeContext 0.08
78 TestFunctional/serial/KubectlGetPods 0.1
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.89
82 TestFunctional/serial/CacheCmd/cache/add_local 0.98
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
86 TestFunctional/serial/CacheCmd/cache/cache_reload 2.07
87 TestFunctional/serial/CacheCmd/cache/delete 0.11
88 TestFunctional/serial/MinikubeKubectlCmd 0.13
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
90 TestFunctional/serial/ExtraConfig 294.44
91 TestFunctional/serial/ComponentHealth 0.1
92 TestFunctional/serial/LogsCmd 1.69
93 TestFunctional/serial/LogsFileCmd 1.74
94 TestFunctional/serial/InvalidService 4.85
96 TestFunctional/parallel/ConfigCmd 0.5
97 TestFunctional/parallel/DashboardCmd 13.45
98 TestFunctional/parallel/DryRun 0.39
99 TestFunctional/parallel/InternationalLanguage 0.2
100 TestFunctional/parallel/StatusCmd 0.98
104 TestFunctional/parallel/ServiceCmdConnect 6.56
105 TestFunctional/parallel/AddonsCmd 0.13
106 TestFunctional/parallel/PersistentVolumeClaim 26.18
108 TestFunctional/parallel/SSHCmd 0.52
109 TestFunctional/parallel/CpCmd 1.88
111 TestFunctional/parallel/FileSync 0.36
112 TestFunctional/parallel/CertSync 2.07
116 TestFunctional/parallel/NodeLabels 0.08
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.63
120 TestFunctional/parallel/License 0.36
121 TestFunctional/parallel/Version/short 0.08
122 TestFunctional/parallel/Version/components 1.17
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.35
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
127 TestFunctional/parallel/ImageCommands/ImageBuild 3.39
128 TestFunctional/parallel/ImageCommands/Setup 0.8
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.75
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.1
134 TestFunctional/parallel/ServiceCmd/DeployApp 10.29
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.62
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.89
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.79
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
141 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.48
142 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.35
145 TestFunctional/parallel/ServiceCmd/List 0.33
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.36
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
148 TestFunctional/parallel/ServiceCmd/Format 0.37
149 TestFunctional/parallel/ServiceCmd/URL 0.35
150 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
151 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
155 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
156 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
157 TestFunctional/parallel/ProfileCmd/profile_list 0.39
158 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
159 TestFunctional/parallel/MountCmd/any-port 6.95
160 TestFunctional/parallel/MountCmd/specific-port 1.95
161 TestFunctional/parallel/MountCmd/VerifyCleanup 1.88
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestMultiControlPlane/serial/StartCluster 182.45
169 TestMultiControlPlane/serial/DeployApp 7.54
170 TestMultiControlPlane/serial/PingHostFromPods 1.54
171 TestMultiControlPlane/serial/AddWorkerNode 38.19
172 TestMultiControlPlane/serial/NodeLabels 0.11
173 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.73
174 TestMultiControlPlane/serial/CopyFile 18.04
175 TestMultiControlPlane/serial/StopSecondaryNode 12.69
176 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.54
177 TestMultiControlPlane/serial/RestartSecondaryNode 21.82
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 16.33
179 TestMultiControlPlane/serial/RestartClusterKeepsNodes 196.13
180 TestMultiControlPlane/serial/DeleteSecondaryNode 13.55
181 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.51
182 TestMultiControlPlane/serial/StopCluster 35.77
183 TestMultiControlPlane/serial/RestartCluster 123.91
184 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.56
185 TestMultiControlPlane/serial/AddSecondaryNode 71.83
186 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.73
190 TestJSONOutput/start/Command 55.69
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 0.69
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.65
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 5.78
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.2
215 TestKicCustomNetwork/create_custom_network 39.04
216 TestKicCustomNetwork/use_default_bridge_network 36.56
217 TestKicExistingNetwork 31.95
218 TestKicCustomSubnet 37.24
219 TestKicStaticIP 33.84
220 TestMainNoArgs 0.05
221 TestMinikubeProfile 70.95
224 TestMountStart/serial/StartWithMountFirst 7.07
225 TestMountStart/serial/VerifyMountFirst 0.26
226 TestMountStart/serial/StartWithMountSecond 7.55
227 TestMountStart/serial/VerifyMountSecond 0.25
228 TestMountStart/serial/DeleteFirst 1.6
229 TestMountStart/serial/VerifyMountPostDelete 0.24
230 TestMountStart/serial/Stop 1.19
231 TestMountStart/serial/RestartStopped 8.77
232 TestMountStart/serial/VerifyMountPostStop 0.26
235 TestMultiNode/serial/FreshStart2Nodes 85.04
236 TestMultiNode/serial/DeployApp2Nodes 5.18
237 TestMultiNode/serial/PingHostFrom2Pods 0.95
238 TestMultiNode/serial/AddNode 29.37
239 TestMultiNode/serial/MultiNodeLabels 0.09
240 TestMultiNode/serial/ProfileList 0.36
241 TestMultiNode/serial/CopyFile 9.65
242 TestMultiNode/serial/StopNode 2.21
243 TestMultiNode/serial/StartAfterStop 9.91
244 TestMultiNode/serial/RestartKeepsNodes 88.19
245 TestMultiNode/serial/DeleteNode 5.28
246 TestMultiNode/serial/StopMultiNode 23.83
247 TestMultiNode/serial/RestartMultiNode 55.81
248 TestMultiNode/serial/ValidateNameConflict 35.86
253 TestPreload 137.8
255 TestScheduledStopUnix 105.93
258 TestInsufficientStorage 10.93
259 TestRunningBinaryUpgrade 110.95
261 TestKubernetesUpgrade 397.13
262 TestMissingContainerUpgrade 140.26
264 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
265 TestNoKubernetes/serial/StartWithK8s 43.64
266 TestNoKubernetes/serial/StartWithStopK8s 7.12
267 TestNoKubernetes/serial/Start 6.69
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
269 TestNoKubernetes/serial/ProfileList 0.91
270 TestNoKubernetes/serial/Stop 1.25
271 TestNoKubernetes/serial/StartNoArgs 7.6
272 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
273 TestStoppedBinaryUpgrade/Setup 1.07
274 TestStoppedBinaryUpgrade/Upgrade 84.26
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.53
284 TestPause/serial/Start 62.86
285 TestPause/serial/SecondStartNoReconfiguration 40.92
286 TestPause/serial/Pause 0.75
287 TestPause/serial/VerifyStatus 0.34
288 TestPause/serial/Unpause 0.71
289 TestPause/serial/PauseAgain 0.99
290 TestPause/serial/DeletePaused 2.45
291 TestPause/serial/VerifyDeletedResources 0.36
299 TestNetworkPlugins/group/false 4.58
304 TestStartStop/group/old-k8s-version/serial/FirstStart 184.78
306 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 62.29
307 TestStartStop/group/old-k8s-version/serial/DeployApp 9.58
308 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.41
309 TestStartStop/group/old-k8s-version/serial/Stop 12.07
310 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.39
311 TestStartStop/group/old-k8s-version/serial/SecondStart 142.67
312 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.45
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.82
314 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.1
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
316 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 272.1
317 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
318 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
319 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
320 TestStartStop/group/old-k8s-version/serial/Pause 2.91
322 TestStartStop/group/embed-certs/serial/FirstStart 61.54
323 TestStartStop/group/embed-certs/serial/DeployApp 9.38
324 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.07
325 TestStartStop/group/embed-certs/serial/Stop 12
326 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
327 TestStartStop/group/embed-certs/serial/SecondStart 288.33
328 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
329 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
330 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
331 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.01
333 TestStartStop/group/no-preload/serial/FirstStart 68.48
334 TestStartStop/group/no-preload/serial/DeployApp 8.4
335 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.17
336 TestStartStop/group/no-preload/serial/Stop 11.97
337 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
338 TestStartStop/group/no-preload/serial/SecondStart 266.29
339 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
340 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.11
341 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
342 TestStartStop/group/embed-certs/serial/Pause 2.95
344 TestStartStop/group/newest-cni/serial/FirstStart 38.98
345 TestStartStop/group/newest-cni/serial/DeployApp 0
346 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.38
347 TestStartStop/group/newest-cni/serial/Stop 1.26
348 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
349 TestStartStop/group/newest-cni/serial/SecondStart 17.6
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
353 TestStartStop/group/newest-cni/serial/Pause 3.44
354 TestNetworkPlugins/group/auto/Start 60.41
355 TestNetworkPlugins/group/auto/KubeletFlags 0.31
356 TestNetworkPlugins/group/auto/NetCatPod 11.35
357 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
358 TestNetworkPlugins/group/auto/DNS 0.19
359 TestNetworkPlugins/group/auto/Localhost 0.16
360 TestNetworkPlugins/group/auto/HairPin 0.15
361 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.16
362 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.4
363 TestStartStop/group/no-preload/serial/Pause 3.84
364 TestNetworkPlugins/group/kindnet/Start 68.96
365 TestNetworkPlugins/group/calico/Start 76.64
366 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
367 TestNetworkPlugins/group/kindnet/KubeletFlags 0.5
368 TestNetworkPlugins/group/kindnet/NetCatPod 10.34
369 TestNetworkPlugins/group/calico/ControllerPod 6.01
370 TestNetworkPlugins/group/kindnet/DNS 0.22
371 TestNetworkPlugins/group/kindnet/Localhost 0.19
372 TestNetworkPlugins/group/kindnet/HairPin 0.16
373 TestNetworkPlugins/group/calico/KubeletFlags 0.27
374 TestNetworkPlugins/group/calico/NetCatPod 15.26
375 TestNetworkPlugins/group/calico/DNS 0.23
376 TestNetworkPlugins/group/calico/Localhost 0.17
377 TestNetworkPlugins/group/calico/HairPin 0.15
378 TestNetworkPlugins/group/custom-flannel/Start 70.59
379 TestNetworkPlugins/group/enable-default-cni/Start 90.85
380 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
381 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.35
382 TestNetworkPlugins/group/custom-flannel/DNS 0.2
383 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
384 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
385 TestNetworkPlugins/group/flannel/Start 65.69
386 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
387 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.4
388 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
389 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
390 TestNetworkPlugins/group/enable-default-cni/HairPin 0.29
391 TestNetworkPlugins/group/bridge/Start 88.63
392 TestNetworkPlugins/group/flannel/ControllerPod 6.01
393 TestNetworkPlugins/group/flannel/KubeletFlags 0.48
394 TestNetworkPlugins/group/flannel/NetCatPod 13.46
395 TestNetworkPlugins/group/flannel/DNS 0.19
396 TestNetworkPlugins/group/flannel/Localhost 0.17
397 TestNetworkPlugins/group/flannel/HairPin 0.16
398 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
399 TestNetworkPlugins/group/bridge/NetCatPod 10.26
400 TestNetworkPlugins/group/bridge/DNS 0.2
401 TestNetworkPlugins/group/bridge/Localhost 0.15
402 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (15.67s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-439568 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-439568 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (15.668836611s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (15.67s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-439568
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-439568: exit status 85 (67.913072ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-439568 | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC |          |
	|         | -p download-only-439568        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:05:17
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:05:17.783138    7589 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:05:17.783318    7589 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:05:17.783332    7589 out.go:304] Setting ErrFile to fd 2...
	I0717 00:05:17.783339    7589 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:05:17.783610    7589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-2269/.minikube/bin
	W0717 00:05:17.783767    7589 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19265-2269/.minikube/config/config.json: open /home/jenkins/minikube-integration/19265-2269/.minikube/config/config.json: no such file or directory
	I0717 00:05:17.784175    7589 out.go:298] Setting JSON to true
	I0717 00:05:17.784916    7589 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2870,"bootTime":1721171848,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0717 00:05:17.784984    7589 start.go:139] virtualization:  
	I0717 00:05:17.788369    7589 out.go:97] [download-only-439568] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0717 00:05:17.788502    7589 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19265-2269/.minikube/cache/preloaded-tarball: no such file or directory
	I0717 00:05:17.788545    7589 notify.go:220] Checking for updates...
	I0717 00:05:17.790718    7589 out.go:169] MINIKUBE_LOCATION=19265
	I0717 00:05:17.792723    7589 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:05:17.794618    7589 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19265-2269/kubeconfig
	I0717 00:05:17.796535    7589 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-2269/.minikube
	I0717 00:05:17.798547    7589 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0717 00:05:17.802433    7589 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 00:05:17.802705    7589 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:05:17.828279    7589 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0717 00:05:17.828375    7589 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:05:18.205714    7589 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-17 00:05:18.196349931 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 00:05:18.205824    7589 docker.go:307] overlay module found
	I0717 00:05:18.207901    7589 out.go:97] Using the docker driver based on user configuration
	I0717 00:05:18.207930    7589 start.go:297] selected driver: docker
	I0717 00:05:18.207938    7589 start.go:901] validating driver "docker" against <nil>
	I0717 00:05:18.208043    7589 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:05:18.261969    7589 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-17 00:05:18.253602454 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 00:05:18.262142    7589 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:05:18.262431    7589 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0717 00:05:18.262599    7589 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 00:05:18.271941    7589 out.go:169] Using Docker driver with root privileges
	I0717 00:05:18.279407    7589 cni.go:84] Creating CNI manager for ""
	I0717 00:05:18.279431    7589 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 00:05:18.279444    7589 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 00:05:18.279538    7589 start.go:340] cluster config:
	{Name:download-only-439568 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-439568 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:05:18.287065    7589 out.go:97] Starting "download-only-439568" primary control-plane node in "download-only-439568" cluster
	I0717 00:05:18.287098    7589 cache.go:121] Beginning downloading kic base image for docker with crio
	I0717 00:05:18.299573    7589 out.go:97] Pulling base image v0.0.44-1721064868-19249 ...
	I0717 00:05:18.299608    7589 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 00:05:18.299733    7589 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local docker daemon
	I0717 00:05:18.315262    7589 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c to local cache
	I0717 00:05:18.315440    7589 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local cache directory
	I0717 00:05:18.315544    7589 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c to local cache
	I0717 00:05:18.356711    7589 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0717 00:05:18.356747    7589 cache.go:56] Caching tarball of preloaded images
	I0717 00:05:18.356913    7589 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 00:05:18.360641    7589 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0717 00:05:18.360672    7589 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0717 00:05:18.450898    7589 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19265-2269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0717 00:05:21.812632    7589 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c as a tarball
	I0717 00:05:25.306127    7589 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0717 00:05:25.306223    7589 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19265-2269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0717 00:05:26.374187    7589 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 00:05:26.374550    7589 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/download-only-439568/config.json ...
	I0717 00:05:26.374586    7589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/download-only-439568/config.json: {Name:mk3b1ffaf163b41830ef1ef2cb5a2e69845c45ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:26.374796    7589 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 00:05:26.374975    7589 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19265-2269/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-439568 host does not exist
	  To start a cluster, run: "minikube start -p download-only-439568"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-439568
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.30.2/json-events (7.39s)

=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-915936 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-915936 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.392988938s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (7.39s)

TestDownloadOnly/v1.30.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

TestDownloadOnly/v1.30.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-915936
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-915936: exit status 85 (67.029168ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-439568 | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC |                     |
	|         | -p download-only-439568        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC | 17 Jul 24 00:05 UTC |
	| delete  | -p download-only-439568        | download-only-439568 | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC | 17 Jul 24 00:05 UTC |
	| start   | -o=json --download-only        | download-only-915936 | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC |                     |
	|         | -p download-only-915936        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:05:33
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:05:33.848607    7790 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:05:33.848724    7790 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:05:33.848733    7790 out.go:304] Setting ErrFile to fd 2...
	I0717 00:05:33.848739    7790 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:05:33.849000    7790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-2269/.minikube/bin
	I0717 00:05:33.849378    7790 out.go:298] Setting JSON to true
	I0717 00:05:33.850112    7790 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2886,"bootTime":1721171848,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0717 00:05:33.850178    7790 start.go:139] virtualization:  
	I0717 00:05:33.853372    7790 out.go:97] [download-only-915936] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0717 00:05:33.853552    7790 notify.go:220] Checking for updates...
	I0717 00:05:33.855713    7790 out.go:169] MINIKUBE_LOCATION=19265
	I0717 00:05:33.857742    7790 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:05:33.859888    7790 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19265-2269/kubeconfig
	I0717 00:05:33.862214    7790 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-2269/.minikube
	I0717 00:05:33.864516    7790 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0717 00:05:33.867910    7790 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 00:05:33.868188    7790 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:05:33.896289    7790 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0717 00:05:33.896393    7790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:05:33.950996    7790 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-17 00:05:33.941765957 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 00:05:33.951104    7790 docker.go:307] overlay module found
	I0717 00:05:33.953364    7790 out.go:97] Using the docker driver based on user configuration
	I0717 00:05:33.953386    7790 start.go:297] selected driver: docker
	I0717 00:05:33.953393    7790 start.go:901] validating driver "docker" against <nil>
	I0717 00:05:33.953499    7790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:05:34.009445    7790 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-17 00:05:34.000415395 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 00:05:34.009607    7790 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:05:34.009872    7790 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0717 00:05:34.010036    7790 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 00:05:34.012329    7790 out.go:169] Using Docker driver with root privileges
	I0717 00:05:34.014154    7790 cni.go:84] Creating CNI manager for ""
	I0717 00:05:34.014175    7790 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 00:05:34.014187    7790 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 00:05:34.014266    7790 start.go:340] cluster config:
	{Name:download-only-915936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-915936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:05:34.016326    7790 out.go:97] Starting "download-only-915936" primary control-plane node in "download-only-915936" cluster
	I0717 00:05:34.016347    7790 cache.go:121] Beginning downloading kic base image for docker with crio
	I0717 00:05:34.018362    7790 out.go:97] Pulling base image v0.0.44-1721064868-19249 ...
	I0717 00:05:34.018400    7790 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:05:34.018506    7790 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local docker daemon
	I0717 00:05:34.032541    7790 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c to local cache
	I0717 00:05:34.032673    7790 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local cache directory
	I0717 00:05:34.032704    7790 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local cache directory, skipping pull
	I0717 00:05:34.032713    7790 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c exists in cache, skipping pull
	I0717 00:05:34.032720    7790 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c as a tarball
	I0717 00:05:34.077438    7790 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4
	I0717 00:05:34.077479    7790 cache.go:56] Caching tarball of preloaded images
	I0717 00:05:34.077666    7790 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:05:34.079722    7790 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0717 00:05:34.079748    7790 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4 ...
	I0717 00:05:34.159791    7790 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:e4bf0ba8584d1a2d67dbb103edb83dd1 -> /home/jenkins/minikube-integration/19265-2269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-915936 host does not exist
	  To start a cluster, run: "minikube start -p download-only-915936"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.07s)
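Note: the preload download in the log above carries an MD5 checksum in its URL query string (`?checksum=md5:...`), which minikube uses to validate the tarball after download. A minimal sketch of that kind of verification (the function name and chunk size are illustrative, not minikube's actual implementation):

```python
import hashlib

def verify_md5(path: str, expected_hex: str, chunk_size: int = 1 << 20) -> bool:
    """Compare a file's MD5 digest against an expected hex string,
    mirroring the checksum=md5:... check seen in the preload download URL."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        # Read in chunks so multi-hundred-MB preload tarballs don't load into RAM.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex
```

A mismatch here is what would surface as a failed `preload.go` checksum step in logs like the one above.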

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-915936
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (15.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-740132 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-740132 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (15.264469338s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (15.26s)
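The json-events test above drives `minikube start -o=json`, which emits one JSON event per output line for machine consumption. A minimal sketch of parsing such a line-delimited stream; the sample event shape below is an illustrative assumption, not minikube's exact schema:

```python
import json

# Hypothetical sample of a JSON-lines event stream; real minikube events are
# CloudEvents-style, and these field names are assumptions for illustration.
SAMPLE = "\n".join([
    '{"type": "step", "data": {"name": "Pulling base image"}}',
    'not-json: this line should be skipped',
    '{"type": "step", "data": {"name": "Downloading Kubernetes preload"}}',
])

def parse_events(stream: str):
    """Yield one dict per non-empty line, skipping lines that fail to parse."""
    for line in stream.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            # Tolerate interleaved non-JSON output rather than aborting.
            continue
```

Skipping unparseable lines keeps the consumer robust if stderr noise gets interleaved with the event stream.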

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-740132
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-740132: exit status 85 (71.957065ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-439568 | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC |                     |
	|         | -p download-only-439568             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC | 17 Jul 24 00:05 UTC |
	| delete  | -p download-only-439568             | download-only-439568 | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC | 17 Jul 24 00:05 UTC |
	| start   | -o=json --download-only             | download-only-915936 | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC |                     |
	|         | -p download-only-915936             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC | 17 Jul 24 00:05 UTC |
	| delete  | -p download-only-915936             | download-only-915936 | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC | 17 Jul 24 00:05 UTC |
	| start   | -o=json --download-only             | download-only-740132 | jenkins | v1.33.1 | 17 Jul 24 00:05 UTC |                     |
	|         | -p download-only-740132             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:05:41
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:05:41.636872    7994 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:05:41.637023    7994 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:05:41.637033    7994 out.go:304] Setting ErrFile to fd 2...
	I0717 00:05:41.637039    7994 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:05:41.637293    7994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-2269/.minikube/bin
	I0717 00:05:41.637687    7994 out.go:298] Setting JSON to true
	I0717 00:05:41.638442    7994 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2894,"bootTime":1721171848,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0717 00:05:41.638514    7994 start.go:139] virtualization:  
	I0717 00:05:41.641085    7994 out.go:97] [download-only-740132] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0717 00:05:41.641298    7994 notify.go:220] Checking for updates...
	I0717 00:05:41.643155    7994 out.go:169] MINIKUBE_LOCATION=19265
	I0717 00:05:41.645582    7994 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:05:41.647564    7994 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19265-2269/kubeconfig
	I0717 00:05:41.648949    7994 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-2269/.minikube
	I0717 00:05:41.650625    7994 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0717 00:05:41.654456    7994 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 00:05:41.654713    7994 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:05:41.684362    7994 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0717 00:05:41.684447    7994 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:05:41.743924    7994 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-17 00:05:41.733918295 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 00:05:41.744046    7994 docker.go:307] overlay module found
	I0717 00:05:41.746007    7994 out.go:97] Using the docker driver based on user configuration
	I0717 00:05:41.746040    7994 start.go:297] selected driver: docker
	I0717 00:05:41.746049    7994 start.go:901] validating driver "docker" against <nil>
	I0717 00:05:41.746190    7994 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:05:41.797092    7994 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-17 00:05:41.787856786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 00:05:41.797260    7994 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:05:41.797514    7994 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0717 00:05:41.797688    7994 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 00:05:41.799836    7994 out.go:169] Using Docker driver with root privileges
	I0717 00:05:41.802030    7994 cni.go:84] Creating CNI manager for ""
	I0717 00:05:41.802050    7994 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 00:05:41.802060    7994 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 00:05:41.802140    7994 start.go:340] cluster config:
	{Name:download-only-740132 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-740132 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:05:41.804275    7994 out.go:97] Starting "download-only-740132" primary control-plane node in "download-only-740132" cluster
	I0717 00:05:41.804296    7994 cache.go:121] Beginning downloading kic base image for docker with crio
	I0717 00:05:41.806284    7994 out.go:97] Pulling base image v0.0.44-1721064868-19249 ...
	I0717 00:05:41.806319    7994 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 00:05:41.806359    7994 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local docker daemon
	I0717 00:05:41.820390    7994 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c to local cache
	I0717 00:05:41.820526    7994 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local cache directory
	I0717 00:05:41.820544    7994 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local cache directory, skipping pull
	I0717 00:05:41.820549    7994 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c exists in cache, skipping pull
	I0717 00:05:41.820557    7994 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c as a tarball
	I0717 00:05:41.865943    7994 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I0717 00:05:41.865968    7994 cache.go:56] Caching tarball of preloaded images
	I0717 00:05:41.866122    7994 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 00:05:41.868496    7994 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0717 00:05:41.868529    7994 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4 ...
	I0717 00:05:42.024403    7994 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:70b5971c257ae4defe1f5d041a04e29c -> /home/jenkins/minikube-integration/19265-2269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I0717 00:05:47.759623    7994 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4 ...
	I0717 00:05:47.759737    7994 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19265-2269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4 ...
	I0717 00:05:48.602320    7994 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0717 00:05:48.602685    7994 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/download-only-740132/config.json ...
	I0717 00:05:48.602719    7994 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/download-only-740132/config.json: {Name:mk2e72ac555863c851acc0648e13a3ac47e16609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:48.602938    7994 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 00:05:48.603088    7994 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19265-2269/.minikube/cache/linux/arm64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-740132 host does not exist
	  To start a cluster, run: "minikube start -p download-only-740132"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.07s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.21s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-740132
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.53s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-257090 --alsologtostderr --binary-mirror http://127.0.0.1:46489 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-257090" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-257090
--- PASS: TestBinaryMirror (0.53s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-579136
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-579136: exit status 85 (64.873739ms)

-- stdout --
	* Profile "addons-579136" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-579136"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-579136
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-579136: exit status 85 (64.465572ms)

-- stdout --
	* Profile "addons-579136" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-579136"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (175.72s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-579136 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-579136 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m55.719749645s)
--- PASS: TestAddons/Setup (175.72s)

TestAddons/parallel/Registry (16.47s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 42.970848ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-9j5kz" [98b03ce8-0e8f-459e-860d-ffeebb54febc] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.007825618s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qckjd" [7a2c2cd9-4d11-44d7-a720-59b20cb1e5c7] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005025724s
addons_test.go:342: (dbg) Run:  kubectl --context addons-579136 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-579136 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-579136 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.408614595s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-579136 ip
2024/07/17 00:09:10 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-579136 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.47s)

TestAddons/parallel/InspektorGadget (11.83s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-vhr8h" [369bd033-3565-4dc0-ae0d-ae6b411fbb42] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004456105s
addons_test.go:843: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-579136
addons_test.go:843: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-579136: (5.828599778s)
--- PASS: TestAddons/parallel/InspektorGadget (11.83s)

TestAddons/parallel/CSI (60.15s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 6.75822ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-579136 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-579136 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [cfea8382-1761-40ab-a556-229c35c8874b] Pending
helpers_test.go:344: "task-pv-pod" [cfea8382-1761-40ab-a556-229c35c8874b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [cfea8382-1761-40ab-a556-229c35c8874b] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.00354769s
addons_test.go:586: (dbg) Run:  kubectl --context addons-579136 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-579136 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-579136 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-579136 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-579136 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-579136 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-579136 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6a070fa3-0b61-4054-bebd-6bc9c6cdb636] Pending
helpers_test.go:344: "task-pv-pod-restore" [6a070fa3-0b61-4054-bebd-6bc9c6cdb636] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6a070fa3-0b61-4054-bebd-6bc9c6cdb636] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003332627s
addons_test.go:628: (dbg) Run:  kubectl --context addons-579136 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-579136 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-579136 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-arm64 -p addons-579136 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-arm64 -p addons-579136 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.718746201s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-579136 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (60.15s)

TestAddons/parallel/Headlamp (13.96s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-579136 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-fq62p" [22b991c3-9666-4703-8ddc-7c2dee71ee5b] Pending
helpers_test.go:344: "headlamp-7867546754-fq62p" [22b991c3-9666-4703-8ddc-7c2dee71ee5b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-fq62p" [22b991c3-9666-4703-8ddc-7c2dee71ee5b] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.003452933s
--- PASS: TestAddons/parallel/Headlamp (13.96s)

TestAddons/parallel/CloudSpanner (5.62s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-g8vj4" [55cfdf0e-539c-49ad-8e38-4cc03aa67448] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004021975s
addons_test.go:862: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-579136
--- PASS: TestAddons/parallel/CloudSpanner (5.62s)

TestAddons/parallel/LocalPath (53.64s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-579136 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-579136 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579136 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [bc7208ce-020e-4ae9-bd8a-9ce8c12735ea] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [bc7208ce-020e-4ae9-bd8a-9ce8c12735ea] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [bc7208ce-020e-4ae9-bd8a-9ce8c12735ea] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004428937s
addons_test.go:992: (dbg) Run:  kubectl --context addons-579136 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-arm64 -p addons-579136 ssh "cat /opt/local-path-provisioner/pvc-79ea8134-8c2c-4ed1-b98e-1d5f361ebf2b_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-579136 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-579136 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-arm64 -p addons-579136 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-arm64 -p addons-579136 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.311018429s)
--- PASS: TestAddons/parallel/LocalPath (53.64s)

TestAddons/parallel/NvidiaDevicePlugin (6.51s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-r42hf" [3055b027-1414-4787-9a8e-0a95e312c842] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004630205s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-579136
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.51s)

TestAddons/parallel/Yakd (5s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-r64g4" [3095ab3e-167f-4472-92c6-f22e3f464e80] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003502777s
--- PASS: TestAddons/parallel/Yakd (5.00s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-579136 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-579136 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/StoppedEnableDisable (12.14s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-579136
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-579136: (11.887600338s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-579136
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-579136
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-579136
--- PASS: TestAddons/StoppedEnableDisable (12.14s)

TestCertOptions (39.22s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-589242 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-589242 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.598972928s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-589242 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-589242 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-589242 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-589242" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-589242
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-589242: (1.976767023s)
--- PASS: TestCertOptions (39.22s)

TestCertExpiration (237.61s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-282532 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E0717 01:02:35.197178    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-282532 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (37.209545995s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-282532 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-282532 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (17.887226739s)
helpers_test.go:175: Cleaning up "cert-expiration-282532" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-282532
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-282532: (2.51528686s)
--- PASS: TestCertExpiration (237.61s)

TestForceSystemdFlag (39.88s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-905770 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-905770 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.601105253s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-905770 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-905770" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-905770
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-905770: (2.925329949s)
--- PASS: TestForceSystemdFlag (39.88s)

TestForceSystemdEnv (44.79s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-317703 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-317703 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.058861688s)
helpers_test.go:175: Cleaning up "force-systemd-env-317703" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-317703
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-317703: (2.734105371s)
--- PASS: TestForceSystemdEnv (44.79s)

TestErrorSpam/setup (34.01s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-918997 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-918997 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-918997 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-918997 --driver=docker  --container-runtime=crio: (34.005849451s)
--- PASS: TestErrorSpam/setup (34.01s)

TestErrorSpam/start (0.73s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-918997 --log_dir /tmp/nospam-918997 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-918997 --log_dir /tmp/nospam-918997 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-918997 --log_dir /tmp/nospam-918997 start --dry-run
--- PASS: TestErrorSpam/start (0.73s)

TestErrorSpam/status (0.94s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-918997 --log_dir /tmp/nospam-918997 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-918997 --log_dir /tmp/nospam-918997 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-918997 --log_dir /tmp/nospam-918997 status
--- PASS: TestErrorSpam/status (0.94s)

TestErrorSpam/pause (1.67s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-918997 --log_dir /tmp/nospam-918997 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-918997 --log_dir /tmp/nospam-918997 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-918997 --log_dir /tmp/nospam-918997 pause
--- PASS: TestErrorSpam/pause (1.67s)

TestErrorSpam/unpause (1.68s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-918997 --log_dir /tmp/nospam-918997 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-918997 --log_dir /tmp/nospam-918997 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-918997 --log_dir /tmp/nospam-918997 unpause
--- PASS: TestErrorSpam/unpause (1.68s)

TestErrorSpam/stop (1.39s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-918997 --log_dir /tmp/nospam-918997 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-918997 --log_dir /tmp/nospam-918997 stop: (1.216288878s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-918997 --log_dir /tmp/nospam-918997 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-918997 --log_dir /tmp/nospam-918997 stop
--- PASS: TestErrorSpam/stop (1.39s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19265-2269/.minikube/files/etc/test/nested/copy/7584/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (60.75s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-915248 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-915248 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m0.747613672s)
--- PASS: TestFunctional/serial/StartWithProxy (60.75s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.53s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-915248 --alsologtostderr -v=8
E0717 00:18:54.518956    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
E0717 00:18:54.526319    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
E0717 00:18:54.536565    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
E0717 00:18:54.556854    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
E0717 00:18:54.597198    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
E0717 00:18:54.677529    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
E0717 00:18:54.837921    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
E0717 00:18:55.158471    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
E0717 00:18:55.799435    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
E0717 00:18:57.079966    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
E0717 00:18:59.640227    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
E0717 00:19:04.761046    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
E0717 00:19:15.001970    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-915248 --alsologtostderr -v=8: (36.52846183s)
functional_test.go:659: soft start took 36.532542797s for "functional-915248" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.53s)

TestFunctional/serial/KubeContext (0.08s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.08s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-915248 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.89s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-915248 cache add registry.k8s.io/pause:3.1: (1.307962255s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-915248 cache add registry.k8s.io/pause:3.3: (1.30368527s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-915248 cache add registry.k8s.io/pause:latest: (1.280247301s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.89s)

TestFunctional/serial/CacheCmd/cache/add_local (0.98s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-915248 /tmp/TestFunctionalserialCacheCmdcacheadd_local852678862/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 cache add minikube-local-cache-test:functional-915248
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 cache delete minikube-local-cache-test:functional-915248
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-915248
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.98s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-915248 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (336.524483ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-915248 cache reload: (1.107220249s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.07s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 kubectl -- --context functional-915248 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-915248 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (294.44s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-915248 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0717 00:19:35.482345    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
E0717 00:20:16.443774    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
E0717 00:21:38.364905    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
E0717 00:23:54.517943    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-915248 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (4m54.438822558s)
functional_test.go:757: restart took 4m54.438925462s for "functional-915248" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (294.44s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-915248 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.69s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-915248 logs: (1.693497631s)
--- PASS: TestFunctional/serial/LogsCmd (1.69s)

TestFunctional/serial/LogsFileCmd (1.74s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 logs --file /tmp/TestFunctionalserialLogsFileCmd3347040058/001/logs.txt
E0717 00:24:22.205101    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-915248 logs --file /tmp/TestFunctionalserialLogsFileCmd3347040058/001/logs.txt: (1.741979438s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.74s)

TestFunctional/serial/InvalidService (4.85s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-915248 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-915248
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-915248: exit status 115 (764.22699ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32483 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-915248 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.85s)

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-915248 config get cpus: exit status 14 (95.407648ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-915248 config get cpus: exit status 14 (94.660885ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)

TestFunctional/parallel/DashboardCmd (13.45s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-915248 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-915248 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 47744: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.45s)

TestFunctional/parallel/DryRun (0.39s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-915248 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-915248 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (175.781776ms)
-- stdout --
	* [functional-915248] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19265-2269/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-2269/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0717 00:25:09.148757   47280 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:25:09.148910   47280 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:25:09.148922   47280 out.go:304] Setting ErrFile to fd 2...
	I0717 00:25:09.148927   47280 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:25:09.149169   47280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-2269/.minikube/bin
	I0717 00:25:09.149518   47280 out.go:298] Setting JSON to false
	I0717 00:25:09.150423   47280 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4062,"bootTime":1721171848,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0717 00:25:09.150493   47280 start.go:139] virtualization:  
	I0717 00:25:09.152946   47280 out.go:177] * [functional-915248] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0717 00:25:09.155065   47280 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 00:25:09.155135   47280 notify.go:220] Checking for updates...
	I0717 00:25:09.158899   47280 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:25:09.160637   47280 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-2269/kubeconfig
	I0717 00:25:09.162378   47280 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-2269/.minikube
	I0717 00:25:09.164104   47280 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 00:25:09.165763   47280 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:25:09.167815   47280 config.go:182] Loaded profile config "functional-915248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:25:09.168444   47280 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:25:09.192243   47280 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0717 00:25:09.192356   47280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:25:09.260422   47280 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:51 SystemTime:2024-07-17 00:25:09.250455332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 00:25:09.260531   47280 docker.go:307] overlay module found
	I0717 00:25:09.262364   47280 out.go:177] * Using the docker driver based on existing profile
	I0717 00:25:09.264197   47280 start.go:297] selected driver: docker
	I0717 00:25:09.264218   47280 start.go:901] validating driver "docker" against &{Name:functional-915248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-915248 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:25:09.264337   47280 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:25:09.266443   47280 out.go:177] 
	W0717 00:25:09.268113   47280 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0717 00:25:09.270023   47280 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-915248 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.39s)
TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-915248 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-915248 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (199.92539ms)
-- stdout --
	* [functional-915248] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19265-2269/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-2269/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0717 00:25:09.543910   47394 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:25:09.544097   47394 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:25:09.544109   47394 out.go:304] Setting ErrFile to fd 2...
	I0717 00:25:09.544132   47394 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:25:09.544522   47394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-2269/.minikube/bin
	I0717 00:25:09.544955   47394 out.go:298] Setting JSON to false
	I0717 00:25:09.545995   47394 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4062,"bootTime":1721171848,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0717 00:25:09.546079   47394 start.go:139] virtualization:  
	I0717 00:25:09.548254   47394 out.go:177] * [functional-915248] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0717 00:25:09.550367   47394 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 00:25:09.550432   47394 notify.go:220] Checking for updates...
	I0717 00:25:09.554163   47394 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:25:09.556333   47394 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-2269/kubeconfig
	I0717 00:25:09.558139   47394 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-2269/.minikube
	I0717 00:25:09.560217   47394 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 00:25:09.561830   47394 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:25:09.564272   47394 config.go:182] Loaded profile config "functional-915248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:25:09.564939   47394 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:25:09.592314   47394 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0717 00:25:09.592434   47394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:25:09.677260   47394 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:51 SystemTime:2024-07-17 00:25:09.66756367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 00:25:09.677369   47394 docker.go:307] overlay module found
	I0717 00:25:09.679471   47394 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0717 00:25:09.681096   47394 start.go:297] selected driver: docker
	I0717 00:25:09.681115   47394 start.go:901] validating driver "docker" against &{Name:functional-915248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-915248 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:25:09.681218   47394 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:25:09.683450   47394 out.go:177] 
	W0717 00:25:09.685073   47394 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0717 00:25:09.686920   47394 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)
TestFunctional/parallel/StatusCmd (0.98s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.98s)
TestFunctional/parallel/ServiceCmdConnect (6.56s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-915248 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-915248 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-xl2rd" [195f048a-423e-4f2c-ad82-5ca1bf17b97e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-xl2rd" [195f048a-423e-4f2c-ad82-5ca1bf17b97e] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.003636657s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:30451
functional_test.go:1671: http://192.168.49.2:30451: success! body:

Hostname: hello-node-connect-6f49f58cd5-xl2rd

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30451
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.56s)
TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)
TestFunctional/parallel/PersistentVolumeClaim (26.18s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [aff1d2ee-3d32-4724-8eb6-1b994308c76e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004133815s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-915248 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-915248 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-915248 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-915248 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d3763290-083c-456f-b8bd-b719c32459a1] Pending
helpers_test.go:344: "sp-pod" [d3763290-083c-456f-b8bd-b719c32459a1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d3763290-083c-456f-b8bd-b719c32459a1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003498766s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-915248 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-915248 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-915248 delete -f testdata/storage-provisioner/pod.yaml: (1.063311904s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-915248 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d698c212-8692-41f5-969a-fbc566374d2e] Pending
helpers_test.go:344: "sp-pod" [d698c212-8692-41f5-969a-fbc566374d2e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.006588478s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-915248 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.18s)
TestFunctional/parallel/SSHCmd (0.52s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.52s)
TestFunctional/parallel/CpCmd (1.88s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh -n functional-915248 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 cp functional-915248:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2647977460/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh -n functional-915248 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh -n functional-915248 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.88s)
TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7584/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh "sudo cat /etc/test/nested/copy/7584/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)
TestFunctional/parallel/CertSync (2.07s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7584.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh "sudo cat /etc/ssl/certs/7584.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7584.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh "sudo cat /usr/share/ca-certificates/7584.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/75842.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh "sudo cat /etc/ssl/certs/75842.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/75842.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh "sudo cat /usr/share/ca-certificates/75842.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.07s)
TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-915248 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-915248 ssh "sudo systemctl is-active docker": exit status 1 (315.044059ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-915248 ssh "sudo systemctl is-active containerd": exit status 1 (311.774327ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)
TestFunctional/parallel/License (0.36s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.36s)
TestFunctional/parallel/Version/short (0.08s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)
TestFunctional/parallel/Version/components (1.17s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-915248 version -o=json --components: (1.169918066s)
--- PASS: TestFunctional/parallel/Version/components (1.17s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-915248 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kindest/kindnetd:v20240513-cd2ac642
docker.io/kicbase/echo-server:functional-915248
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-915248 image ls --format short --alsologtostderr:
I0717 00:25:12.009454   47866 out.go:291] Setting OutFile to fd 1 ...
I0717 00:25:12.009590   47866 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:25:12.009601   47866 out.go:304] Setting ErrFile to fd 2...
I0717 00:25:12.009607   47866 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:25:12.009872   47866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-2269/.minikube/bin
I0717 00:25:12.010493   47866 config.go:182] Loaded profile config "functional-915248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:25:12.010606   47866 config.go:182] Loaded profile config "functional-915248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:25:12.011124   47866 cli_runner.go:164] Run: docker container inspect functional-915248 --format={{.State.Status}}
I0717 00:25:12.028280   47866 ssh_runner.go:195] Run: systemctl --version
I0717 00:25:12.028342   47866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-915248
I0717 00:25:12.045327   47866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/functional-915248/id_rsa Username:docker}
I0717 00:25:12.139589   47866 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-915248 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5e32961ddcea3 | 90.3MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| localhost/my-image                      | functional-915248  | 1a4d561105c46 | 1.64MB |
| registry.k8s.io/kube-proxy              | v1.30.2            | 66dbb96a9149f | 89.2MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/kube-controller-manager | v1.30.2            | e1dcc3400d3ea | 108MB  |
| docker.io/kindest/kindnetd              | v20240513-cd2ac642 | 89d73d416b992 | 62MB   |
| docker.io/library/nginx                 | alpine             | 5461b18aaccf3 | 46.7MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | 2437cf7621777 | 58.8MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 014faa467e297 | 140MB  |
| registry.k8s.io/kube-apiserver          | v1.30.2            | 84c601f3f72c8 | 114MB  |
| registry.k8s.io/kube-scheduler          | v1.30.2            | c7dd04b1bafeb | 61.6MB |
| docker.io/kicbase/echo-server           | functional-915248  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/library/nginx                 | latest             | 443d199e8bfcc | 197MB  |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-915248 image ls --format table --alsologtostderr:
I0717 00:25:16.264566   48264 out.go:291] Setting OutFile to fd 1 ...
I0717 00:25:16.264781   48264 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:25:16.264809   48264 out.go:304] Setting ErrFile to fd 2...
I0717 00:25:16.264831   48264 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:25:16.265128   48264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-2269/.minikube/bin
I0717 00:25:16.265800   48264 config.go:182] Loaded profile config "functional-915248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:25:16.265964   48264 config.go:182] Loaded profile config "functional-915248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:25:16.266480   48264 cli_runner.go:164] Run: docker container inspect functional-915248 --format={{.State.Status}}
I0717 00:25:16.296040   48264 ssh_runner.go:195] Run: systemctl --version
I0717 00:25:16.296105   48264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-915248
I0717 00:25:16.324997   48264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/functional-915248/id_rsa Username:docker}
I0717 00:25:16.444933   48264 ssh_runner.go:195] Run: sudo crictl images --output json
2024/07/17 00:25:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-915248 image ls --format json --alsologtostderr:
[{"id":"66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae","repoDigests":["registry.k8s.io/kube-proxy@sha256:7df12f2b1bad9a90a39a1ca558501a4ba66b8943df1d5f2438788aa15c9d23ef","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.2"],"size":"89199511"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["docker.io/kicbase/echo-server:functional-915248"],"size":"4788229"},{"id":"1a4d561105c46388024d63ebb826136c490210dcfaac3d0ac9e4f4dea2f29a56","repoDigests":["localhost/my-image@sha256:3a11455518665a6109bc388dc1b8703c45739c78ab0d98a1cfe9486674c80402"],"repoTags":["localhost/my-image:functional-915248"],"size":"1640226"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d","registry.k8s.io/kube-apiserver@sha256:74ea4e3a814490ffe1a66434837aea1e73006d559b65a6321f3e41fc105845b7"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"],"size":"113538528"},{"id":"c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc","registry.k8s.io/kube-scheduler@sha256:96a3e2d1761583447d4ae302128b4956b855d14cdd5bf9ed4637d8b9f0c74a27"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.2"],"size":"61568326"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"949e88e43488f7daf03c86406f85aa8ede2884c784a22f38b6267aaba7aebfeb","repoDigests":["docker.io/library/e2fe980b373ac63a42868f6081a5aaab01f1ec8b3f541958db284c8026271460-tmp@sha256:032765f262abc879367fb703e708091528bb406c76f921e5117d54c901eb02fe"],"repoTags":[],"size":"1637644"},{"id":"5461b18aaccf366faf9fba071a5f1ac333cd13435366b32c5e9b8ec903fa18a1","repoDigests":["docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55","docker.io/library/nginx@sha256:a7164ab2224553c2da2303d490474d4d546d2141eef1c6367a38d37d46992c62"],"repoTags":["docker.io/library/nginx:alpine"],"size":"46671377"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b","registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"140414767"},{"id":"e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e","registry.k8s.io/kube-controller-manager@sha256:8ddc81caccc97ada7e3c53ebe2c03240f25cd123c479752a1c314c402b972028"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.2"],"size":"108229958"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618","repoDigests":["docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df","docker.io/library/nginx@sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e"],"repoTags":["docker.io/library/nginx:latest"],"size":"197104786"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"58812704"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"89d73d416b992e8f9602b67b4614d9e7f0655aebb3696e18efec695e0b654c40","repoDigests":["docker.io/kindest/kindnetd@sha256:1770ac17c925dfef54061d598c65310ff99269a3a77d5c7257f04366b38c64be","docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"],"repoTags":["docker.io/kindest/kindnetd:v20240513-cd2ac642"],"size":"62007858"},{"id":"5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2","repoDigests":["docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493","docker.io/kindest/kindnetd@sha256:ca8545687e833593ef3047fdbb04957ab9a32153bc36738760b6975879ada987"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"90278450"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-915248 image ls --format json --alsologtostderr:
I0717 00:25:15.911236   48233 out.go:291] Setting OutFile to fd 1 ...
I0717 00:25:15.911499   48233 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:25:15.911535   48233 out.go:304] Setting ErrFile to fd 2...
I0717 00:25:15.911557   48233 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:25:15.911817   48233 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-2269/.minikube/bin
I0717 00:25:15.912640   48233 config.go:182] Loaded profile config "functional-915248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:25:15.912839   48233 config.go:182] Loaded profile config "functional-915248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:25:15.913507   48233 cli_runner.go:164] Run: docker container inspect functional-915248 --format={{.State.Status}}
I0717 00:25:15.964809   48233 ssh_runner.go:195] Run: systemctl --version
I0717 00:25:15.964860   48233 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-915248
I0717 00:25:16.017280   48233 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/functional-915248/id_rsa Username:docker}
I0717 00:25:16.129527   48233 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-915248 image ls --format yaml --alsologtostderr:
- id: 84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d
- registry.k8s.io/kube-apiserver@sha256:74ea4e3a814490ffe1a66434837aea1e73006d559b65a6321f3e41fc105845b7
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.2
size: "113538528"
- id: e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e
- registry.k8s.io/kube-controller-manager@sha256:8ddc81caccc97ada7e3c53ebe2c03240f25cd123c479752a1c314c402b972028
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.2
size: "108229958"
- id: 89d73d416b992e8f9602b67b4614d9e7f0655aebb3696e18efec695e0b654c40
repoDigests:
- docker.io/kindest/kindnetd@sha256:1770ac17c925dfef54061d598c65310ff99269a3a77d5c7257f04366b38c64be
- docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8
repoTags:
- docker.io/kindest/kindnetd:v20240513-cd2ac642
size: "62007858"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
- registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "140414767"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "58812704"
- id: 66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae
repoDigests:
- registry.k8s.io/kube-proxy@sha256:7df12f2b1bad9a90a39a1ca558501a4ba66b8943df1d5f2438788aa15c9d23ef
- registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec
repoTags:
- registry.k8s.io/kube-proxy:v1.30.2
size: "89199511"
- id: 443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618
repoDigests:
- docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df
- docker.io/library/nginx@sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e
repoTags:
- docker.io/library/nginx:latest
size: "197104786"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc
- registry.k8s.io/kube-scheduler@sha256:96a3e2d1761583447d4ae302128b4956b855d14cdd5bf9ed4637d8b9f0c74a27
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.2
size: "61568326"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- docker.io/kicbase/echo-server:functional-915248
size: "4788229"
- id: 5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2
repoDigests:
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
- docker.io/kindest/kindnetd@sha256:ca8545687e833593ef3047fdbb04957ab9a32153bc36738760b6975879ada987
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "90278450"
- id: 5461b18aaccf366faf9fba071a5f1ac333cd13435366b32c5e9b8ec903fa18a1
repoDigests:
- docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55
- docker.io/library/nginx@sha256:a7164ab2224553c2da2303d490474d4d546d2141eef1c6367a38d37d46992c62
repoTags:
- docker.io/library/nginx:alpine
size: "46671377"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-915248 image ls --format yaml --alsologtostderr:
I0717 00:25:12.245829   47897 out.go:291] Setting OutFile to fd 1 ...
I0717 00:25:12.246136   47897 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:25:12.246224   47897 out.go:304] Setting ErrFile to fd 2...
I0717 00:25:12.246249   47897 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:25:12.246522   47897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-2269/.minikube/bin
I0717 00:25:12.247276   47897 config.go:182] Loaded profile config "functional-915248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:25:12.247462   47897 config.go:182] Loaded profile config "functional-915248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:25:12.248074   47897 cli_runner.go:164] Run: docker container inspect functional-915248 --format={{.State.Status}}
I0717 00:25:12.266393   47897 ssh_runner.go:195] Run: systemctl --version
I0717 00:25:12.266452   47897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-915248
I0717 00:25:12.285769   47897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/functional-915248/id_rsa Username:docker}
I0717 00:25:12.379458   47897 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
TestFunctional/parallel/ImageCommands/ImageBuild (3.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-915248 ssh pgrep buildkitd: exit status 1 (358.969511ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 image build -t localhost/my-image:functional-915248 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-915248 image build -t localhost/my-image:functional-915248 testdata/build --alsologtostderr: (2.700631872s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-915248 image build -t localhost/my-image:functional-915248 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 949e88e4348
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-915248
--> 1a4d561105c
Successfully tagged localhost/my-image:functional-915248
1a4d561105c46388024d63ebb826136c490210dcfaac3d0ac9e4f4dea2f29a56
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-915248 image build -t localhost/my-image:functional-915248 testdata/build --alsologtostderr:
I0717 00:25:12.885871   47981 out.go:291] Setting OutFile to fd 1 ...
I0717 00:25:12.886096   47981 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:25:12.886128   47981 out.go:304] Setting ErrFile to fd 2...
I0717 00:25:12.886167   47981 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:25:12.886542   47981 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-2269/.minikube/bin
I0717 00:25:12.890887   47981 config.go:182] Loaded profile config "functional-915248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:25:12.893488   47981 config.go:182] Loaded profile config "functional-915248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:25:12.894190   47981 cli_runner.go:164] Run: docker container inspect functional-915248 --format={{.State.Status}}
I0717 00:25:12.918551   47981 ssh_runner.go:195] Run: systemctl --version
I0717 00:25:12.918605   47981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-915248
I0717 00:25:12.937225   47981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/functional-915248/id_rsa Username:docker}
I0717 00:25:13.059211   47981 build_images.go:161] Building image from path: /tmp/build.2420400884.tar
I0717 00:25:13.059308   47981 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0717 00:25:13.089548   47981 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2420400884.tar
I0717 00:25:13.094094   47981 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2420400884.tar: stat -c "%s %y" /var/lib/minikube/build/build.2420400884.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2420400884.tar': No such file or directory
I0717 00:25:13.094123   47981 ssh_runner.go:362] scp /tmp/build.2420400884.tar --> /var/lib/minikube/build/build.2420400884.tar (3072 bytes)
I0717 00:25:13.122678   47981 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2420400884
I0717 00:25:13.132653   47981 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2420400884 -xf /var/lib/minikube/build/build.2420400884.tar
I0717 00:25:13.142700   47981 crio.go:315] Building image: /var/lib/minikube/build/build.2420400884
I0717 00:25:13.142975   47981 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-915248 /var/lib/minikube/build/build.2420400884 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0717 00:25:15.458811   47981 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-915248 /var/lib/minikube/build/build.2420400884 --cgroup-manager=cgroupfs: (2.315799635s)
I0717 00:25:15.458899   47981 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2420400884
I0717 00:25:15.474064   47981 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2420400884.tar
I0717 00:25:15.494025   47981 build_images.go:217] Built localhost/my-image:functional-915248 from /tmp/build.2420400884.tar
I0717 00:25:15.494117   47981 build_images.go:133] succeeded building to: functional-915248
I0717 00:25:15.494126   47981 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.39s)
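The stderr trace above records minikube's crio image-build path: the local build context is tarred, copied to the node over ssh, unpacked under /var/lib/minikube/build, and built with `sudo podman build`. A minimal sketch of the same staging steps, using temporary directories in place of minikube's real paths (all names here are illustrative, not minikube's actual code):

```shell
set -eu

# Stand-ins for the paths in the log: testdata/build and /var/lib/minikube/build.
ctx=$(mktemp -d)
stage=$(mktemp -d)

# A tiny build context, like testdata/build in the test.
printf 'FROM scratch\nADD content.txt /\n' > "$ctx/Dockerfile"
echo hello > "$ctx/content.txt"

# Tar the context (build.NNNN.tar in the log), then unpack it in the staging
# dir, mirroring the mkdir/tar steps the ssh_runner executes on the node.
tar -C "$ctx" -cf "$stage/build.tar" .
mkdir -p "$stage/build"
tar -C "$stage/build" -xf "$stage/build.tar"

ls "$stage/build"
# On the node, minikube would then run (roughly):
#   sudo podman build -t localhost/my-image:TAG "$stage/build" --cgroup-manager=cgroupfs
```

The stat-before-scp failure in the log ("No such file or directory") is expected: it is only an existence check before the tarball is copied over.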

TestFunctional/parallel/ImageCommands/Setup (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-915248
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.80s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 image load --daemon docker.io/kicbase/echo-server:functional-915248 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-915248 image load --daemon docker.io/kicbase/echo-server:functional-915248 --alsologtostderr: (1.425486196s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.75s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 image load --daemon docker.io/kicbase/echo-server:functional-915248 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.10s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-915248 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-915248 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-bd6w5" [30d10224-fc84-4f56-a5e6-0ec7d8af0958] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-bd6w5" [30d10224-fc84-4f56-a5e6-0ec7d8af0958] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.00427164s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.29s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-915248
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 image load --daemon docker.io/kicbase/echo-server:functional-915248 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 image save docker.io/kicbase/echo-server:functional-915248 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-arm64 -p functional-915248 image save docker.io/kicbase/echo-server:functional-915248 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr: (1.892830932s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.89s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 image rm docker.io/kicbase/echo-server:functional-915248 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-915248
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 image save --daemon docker.io/kicbase/echo-server:functional-915248 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-915248
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-915248 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-915248 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-915248 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-915248 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 44314: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-915248 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-915248 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [7194bdf0-00e1-40eb-b130-a084af57efd7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [7194bdf0-00e1-40eb-b130-a084af57efd7] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003922612s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.35s)

TestFunctional/parallel/ServiceCmd/List (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 service list -o json
functional_test.go:1490: Took "360.184886ms" to run "out/minikube-linux-arm64 -p functional-915248 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:32144
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:32144
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-915248 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.106.75 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-915248 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "336.28666ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "55.00365ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "319.415482ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "56.190777ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

TestFunctional/parallel/MountCmd/any-port (6.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-915248 /tmp/TestFunctionalparallelMountCmdany-port637630247/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721175898316301248" to /tmp/TestFunctionalparallelMountCmdany-port637630247/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721175898316301248" to /tmp/TestFunctionalparallelMountCmdany-port637630247/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721175898316301248" to /tmp/TestFunctionalparallelMountCmdany-port637630247/001/test-1721175898316301248
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-915248 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (319.570697ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 17 00:24 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 17 00:24 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 17 00:24 test-1721175898316301248
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh cat /mount-9p/test-1721175898316301248
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-915248 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [af196dd7-2036-429b-9312-79a297e545a8] Pending
helpers_test.go:344: "busybox-mount" [af196dd7-2036-429b-9312-79a297e545a8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [af196dd7-2036-429b-9312-79a297e545a8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [af196dd7-2036-429b-9312-79a297e545a8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003916615s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-915248 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-915248 /tmp/TestFunctionalparallelMountCmdany-port637630247/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.95s)
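The mount tests verify the 9p mount the same way each time: `findmnt -T <path>` reports which mount backs a path, and the test greps its output for `9p`. A small sketch of that check, using `/` as a stand-in target since no 9p mount is assumed here:

```shell
set -eu

# findmnt -T resolves the mount backing a path; -n drops the header row and
# -o selects columns. The test pipes this through `grep 9p` against /mount-9p.
target=/
fstype=$(findmnt -T "$target" -n -o FSTYPE)
echo "$target is backed by a $fstype filesystem"
```

The first `findmnt` attempt in the log fails with exit status 1 simply because the mount daemon had not finished mounting yet; the retry succeeds.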

TestFunctional/parallel/MountCmd/specific-port (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-915248 /tmp/TestFunctionalparallelMountCmdspecific-port2596857131/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-915248 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (318.205388ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-915248 /tmp/TestFunctionalparallelMountCmdspecific-port2596857131/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-915248 ssh "sudo umount -f /mount-9p": exit status 1 (259.047653ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-915248 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-915248 /tmp/TestFunctionalparallelMountCmdspecific-port2596857131/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.95s)
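Note the benign failure inside this passing test: by the time cleanup runs `sudo umount -f /mount-9p`, the mount is already gone, so umount exits with status 32 ("not mounted") and the test merely logs it. A sketch of tolerating that case during cleanup, run against a temp directory that was never mounted:

```shell
set -eu

# umount exits non-zero (32 for "not mounted") when the target is not a
# mount point; treat that as success during cleanup, as the test does.
d=$(mktemp -d)
if umount "$d" 2>/dev/null; then
  status=was-mounted
else
  status=not-mounted
fi
echo "cleanup of $d: $status"
rmdir "$d"
```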

TestFunctional/parallel/MountCmd/VerifyCleanup (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-915248 /tmp/TestFunctionalparallelMountCmdVerifyCleanup539897410/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-915248 /tmp/TestFunctionalparallelMountCmdVerifyCleanup539897410/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-915248 /tmp/TestFunctionalparallelMountCmdVerifyCleanup539897410/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-915248 ssh "findmnt -T" /mount1: exit status 1 (490.349786ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-915248 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-915248 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-915248 /tmp/TestFunctionalparallelMountCmdVerifyCleanup539897410/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-915248 /tmp/TestFunctionalparallelMountCmdVerifyCleanup539897410/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-915248 /tmp/TestFunctionalparallelMountCmdVerifyCleanup539897410/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.88s)

TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-915248
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-915248
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-915248
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (182.45s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-601338 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-601338 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (3m1.633808806s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (182.45s)

TestMultiControlPlane/serial/DeployApp (7.54s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-601338 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-601338 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-601338 -- rollout status deployment/busybox: (4.558951034s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-601338 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-601338 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-601338 -- exec busybox-fc5497c4f-cnl9h -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-601338 -- exec busybox-fc5497c4f-j2ckl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-601338 -- exec busybox-fc5497c4f-qrb7q -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-601338 -- exec busybox-fc5497c4f-cnl9h -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-601338 -- exec busybox-fc5497c4f-j2ckl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-601338 -- exec busybox-fc5497c4f-qrb7q -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-601338 -- exec busybox-fc5497c4f-cnl9h -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-601338 -- exec busybox-fc5497c4f-j2ckl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-601338 -- exec busybox-fc5497c4f-qrb7q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.54s)

TestMultiControlPlane/serial/PingHostFromPods (1.54s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-601338 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-601338 -- exec busybox-fc5497c4f-cnl9h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-601338 -- exec busybox-fc5497c4f-cnl9h -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-601338 -- exec busybox-fc5497c4f-j2ckl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-601338 -- exec busybox-fc5497c4f-j2ckl -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-601338 -- exec busybox-fc5497c4f-qrb7q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-601338 -- exec busybox-fc5497c4f-qrb7q -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.54s)

TestMultiControlPlane/serial/AddWorkerNode (38.19s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-601338 -v=7 --alsologtostderr
E0717 00:28:54.517927    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-601338 -v=7 --alsologtostderr: (37.23194365s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (38.19s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-601338 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.73s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.73s)

TestMultiControlPlane/serial/CopyFile (18.04s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 cp testdata/cp-test.txt ha-601338:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 cp ha-601338:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2861524312/001/cp-test_ha-601338.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 cp ha-601338:/home/docker/cp-test.txt ha-601338-m02:/home/docker/cp-test_ha-601338_ha-601338-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338-m02 "sudo cat /home/docker/cp-test_ha-601338_ha-601338-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 cp ha-601338:/home/docker/cp-test.txt ha-601338-m03:/home/docker/cp-test_ha-601338_ha-601338-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338-m03 "sudo cat /home/docker/cp-test_ha-601338_ha-601338-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 cp ha-601338:/home/docker/cp-test.txt ha-601338-m04:/home/docker/cp-test_ha-601338_ha-601338-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338-m04 "sudo cat /home/docker/cp-test_ha-601338_ha-601338-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 cp testdata/cp-test.txt ha-601338-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 cp ha-601338-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2861524312/001/cp-test_ha-601338-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 cp ha-601338-m02:/home/docker/cp-test.txt ha-601338:/home/docker/cp-test_ha-601338-m02_ha-601338.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338 "sudo cat /home/docker/cp-test_ha-601338-m02_ha-601338.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 cp ha-601338-m02:/home/docker/cp-test.txt ha-601338-m03:/home/docker/cp-test_ha-601338-m02_ha-601338-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338-m03 "sudo cat /home/docker/cp-test_ha-601338-m02_ha-601338-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 cp ha-601338-m02:/home/docker/cp-test.txt ha-601338-m04:/home/docker/cp-test_ha-601338-m02_ha-601338-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338-m04 "sudo cat /home/docker/cp-test_ha-601338-m02_ha-601338-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 cp testdata/cp-test.txt ha-601338-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 cp ha-601338-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2861524312/001/cp-test_ha-601338-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 cp ha-601338-m03:/home/docker/cp-test.txt ha-601338:/home/docker/cp-test_ha-601338-m03_ha-601338.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338 "sudo cat /home/docker/cp-test_ha-601338-m03_ha-601338.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 cp ha-601338-m03:/home/docker/cp-test.txt ha-601338-m02:/home/docker/cp-test_ha-601338-m03_ha-601338-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338-m02 "sudo cat /home/docker/cp-test_ha-601338-m03_ha-601338-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 cp ha-601338-m03:/home/docker/cp-test.txt ha-601338-m04:/home/docker/cp-test_ha-601338-m03_ha-601338-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338-m04 "sudo cat /home/docker/cp-test_ha-601338-m03_ha-601338-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 cp testdata/cp-test.txt ha-601338-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 cp ha-601338-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2861524312/001/cp-test_ha-601338-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 cp ha-601338-m04:/home/docker/cp-test.txt ha-601338:/home/docker/cp-test_ha-601338-m04_ha-601338.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338-m04 "sudo cat /home/docker/cp-test.txt"
E0717 00:29:32.153023    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
E0717 00:29:32.158274    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
E0717 00:29:32.168626    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
E0717 00:29:32.189255    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
E0717 00:29:32.229486    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
E0717 00:29:32.309808    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338 "sudo cat /home/docker/cp-test_ha-601338-m04_ha-601338.txt"
E0717 00:29:32.470951    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 cp ha-601338-m04:/home/docker/cp-test.txt ha-601338-m02:/home/docker/cp-test_ha-601338-m04_ha-601338-m02.txt
E0717 00:29:32.792065    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338-m02 "sudo cat /home/docker/cp-test_ha-601338-m04_ha-601338-m02.txt"
E0717 00:29:33.432359    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 cp ha-601338-m04:/home/docker/cp-test.txt ha-601338-m03:/home/docker/cp-test_ha-601338-m04_ha-601338-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 ssh -n ha-601338-m03 "sudo cat /home/docker/cp-test_ha-601338-m04_ha-601338-m03.txt"
E0717 00:29:34.713350    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/CopyFile (18.04s)

TestMultiControlPlane/serial/StopSecondaryNode (12.69s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 node stop m02 -v=7 --alsologtostderr
E0717 00:29:37.273558    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
E0717 00:29:42.394281    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-601338 node stop m02 -v=7 --alsologtostderr: (11.974888366s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-601338 status -v=7 --alsologtostderr: exit status 7 (716.216061ms)

-- stdout --
	ha-601338
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-601338-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-601338-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-601338-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0717 00:29:46.751149   64145 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:29:46.751265   64145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:29:46.751270   64145 out.go:304] Setting ErrFile to fd 2...
	I0717 00:29:46.751275   64145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:29:46.751567   64145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-2269/.minikube/bin
	I0717 00:29:46.751749   64145 out.go:298] Setting JSON to false
	I0717 00:29:46.751766   64145 mustload.go:65] Loading cluster: ha-601338
	I0717 00:29:46.752160   64145 config.go:182] Loaded profile config "ha-601338": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:29:46.752172   64145 status.go:255] checking status of ha-601338 ...
	I0717 00:29:46.752755   64145 cli_runner.go:164] Run: docker container inspect ha-601338 --format={{.State.Status}}
	I0717 00:29:46.753226   64145 notify.go:220] Checking for updates...
	I0717 00:29:46.771074   64145 status.go:330] ha-601338 host status = "Running" (err=<nil>)
	I0717 00:29:46.771100   64145 host.go:66] Checking if "ha-601338" exists ...
	I0717 00:29:46.771398   64145 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-601338
	I0717 00:29:46.798043   64145 host.go:66] Checking if "ha-601338" exists ...
	I0717 00:29:46.798431   64145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:29:46.798515   64145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-601338
	I0717 00:29:46.829839   64145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/ha-601338/id_rsa Username:docker}
	I0717 00:29:46.920073   64145 ssh_runner.go:195] Run: systemctl --version
	I0717 00:29:46.924387   64145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:29:46.936972   64145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:29:46.999353   64145 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-07-17 00:29:46.989320175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 00:29:46.999944   64145 kubeconfig.go:125] found "ha-601338" server: "https://192.168.49.254:8443"
	I0717 00:29:47.000060   64145 api_server.go:166] Checking apiserver status ...
	I0717 00:29:47.000127   64145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:29:47.010757   64145 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup
	I0717 00:29:47.021402   64145 api_server.go:182] apiserver freezer: "7:freezer:/docker/2a7eda297e329483ab8efef29a931c4f9b6fa9c559364004c8f24793cb54116f/crio/crio-1fb4e83d28fb4573f0495befdd177694417ff095d055711dd83a62ef2a048564"
	I0717 00:29:47.021472   64145 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2a7eda297e329483ab8efef29a931c4f9b6fa9c559364004c8f24793cb54116f/crio/crio-1fb4e83d28fb4573f0495befdd177694417ff095d055711dd83a62ef2a048564/freezer.state
	I0717 00:29:47.030490   64145 api_server.go:204] freezer state: "THAWED"
	I0717 00:29:47.030517   64145 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0717 00:29:47.038343   64145 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0717 00:29:47.038372   64145 status.go:422] ha-601338 apiserver status = Running (err=<nil>)
	I0717 00:29:47.038383   64145 status.go:257] ha-601338 status: &{Name:ha-601338 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:29:47.038401   64145 status.go:255] checking status of ha-601338-m02 ...
	I0717 00:29:47.038705   64145 cli_runner.go:164] Run: docker container inspect ha-601338-m02 --format={{.State.Status}}
	I0717 00:29:47.055569   64145 status.go:330] ha-601338-m02 host status = "Stopped" (err=<nil>)
	I0717 00:29:47.055591   64145 status.go:343] host is not running, skipping remaining checks
	I0717 00:29:47.055613   64145 status.go:257] ha-601338-m02 status: &{Name:ha-601338-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:29:47.055633   64145 status.go:255] checking status of ha-601338-m03 ...
	I0717 00:29:47.055936   64145 cli_runner.go:164] Run: docker container inspect ha-601338-m03 --format={{.State.Status}}
	I0717 00:29:47.074004   64145 status.go:330] ha-601338-m03 host status = "Running" (err=<nil>)
	I0717 00:29:47.074038   64145 host.go:66] Checking if "ha-601338-m03" exists ...
	I0717 00:29:47.074410   64145 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-601338-m03
	I0717 00:29:47.091912   64145 host.go:66] Checking if "ha-601338-m03" exists ...
	I0717 00:29:47.092221   64145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:29:47.092270   64145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-601338-m03
	I0717 00:29:47.109090   64145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/ha-601338-m03/id_rsa Username:docker}
	I0717 00:29:47.200461   64145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:29:47.212347   64145 kubeconfig.go:125] found "ha-601338" server: "https://192.168.49.254:8443"
	I0717 00:29:47.212377   64145 api_server.go:166] Checking apiserver status ...
	I0717 00:29:47.212439   64145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:29:47.223273   64145 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1362/cgroup
	I0717 00:29:47.232489   64145 api_server.go:182] apiserver freezer: "7:freezer:/docker/2c6d53f217bdb4f7969482ecb334a15dfb96f7a083048150656bc5dff5d72538/crio/crio-5b399c863e5c80a7fbf0f2e8e52a8654a1d81ab70765bb43ab85cb88e43681c3"
	I0717 00:29:47.232585   64145 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2c6d53f217bdb4f7969482ecb334a15dfb96f7a083048150656bc5dff5d72538/crio/crio-5b399c863e5c80a7fbf0f2e8e52a8654a1d81ab70765bb43ab85cb88e43681c3/freezer.state
	I0717 00:29:47.241534   64145 api_server.go:204] freezer state: "THAWED"
	I0717 00:29:47.241562   64145 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0717 00:29:47.249210   64145 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0717 00:29:47.249234   64145 status.go:422] ha-601338-m03 apiserver status = Running (err=<nil>)
	I0717 00:29:47.249262   64145 status.go:257] ha-601338-m03 status: &{Name:ha-601338-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:29:47.249286   64145 status.go:255] checking status of ha-601338-m04 ...
	I0717 00:29:47.249584   64145 cli_runner.go:164] Run: docker container inspect ha-601338-m04 --format={{.State.Status}}
	I0717 00:29:47.267204   64145 status.go:330] ha-601338-m04 host status = "Running" (err=<nil>)
	I0717 00:29:47.267230   64145 host.go:66] Checking if "ha-601338-m04" exists ...
	I0717 00:29:47.267546   64145 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-601338-m04
	I0717 00:29:47.286290   64145 host.go:66] Checking if "ha-601338-m04" exists ...
	I0717 00:29:47.286590   64145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:29:47.286639   64145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-601338-m04
	I0717 00:29:47.306202   64145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/ha-601338-m04/id_rsa Username:docker}
	I0717 00:29:47.399996   64145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:29:47.415325   64145 status.go:257] ha-601338-m04 status: &{Name:ha-601338-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.69s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

TestMultiControlPlane/serial/RestartSecondaryNode (21.82s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 node start m02 -v=7 --alsologtostderr
E0717 00:29:52.634473    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-601338 node start m02 -v=7 --alsologtostderr: (20.350146975s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-601338 status -v=7 --alsologtostderr: (1.353957s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (21.82s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (16.33s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0717 00:30:13.115195    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (16.328374202s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (16.33s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (196.13s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-601338 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-601338 -v=7 --alsologtostderr
E0717 00:30:54.075418    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-601338 -v=7 --alsologtostderr: (36.953798183s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-601338 --wait=true -v=7 --alsologtostderr
E0717 00:32:15.995659    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-601338 --wait=true -v=7 --alsologtostderr: (2m39.02626826s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-601338
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (196.13s)

TestMultiControlPlane/serial/DeleteSecondaryNode (13.55s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 node delete m03 -v=7 --alsologtostderr
E0717 00:33:54.518554    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-601338 node delete m03 -v=7 --alsologtostderr: (12.668804374s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (13.55s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

TestMultiControlPlane/serial/StopCluster (35.77s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-601338 stop -v=7 --alsologtostderr: (35.655802837s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-601338 status -v=7 --alsologtostderr: exit status 7 (113.293838ms)
-- stdout --
	ha-601338
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-601338-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-601338-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0717 00:34:31.993050   78696 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:34:31.993200   78696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:34:31.993210   78696 out.go:304] Setting ErrFile to fd 2...
	I0717 00:34:31.993215   78696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:34:31.993471   78696 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-2269/.minikube/bin
	I0717 00:34:31.993654   78696 out.go:298] Setting JSON to false
	I0717 00:34:31.993688   78696 mustload.go:65] Loading cluster: ha-601338
	I0717 00:34:31.993778   78696 notify.go:220] Checking for updates...
	I0717 00:34:31.994088   78696 config.go:182] Loaded profile config "ha-601338": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:34:31.994106   78696 status.go:255] checking status of ha-601338 ...
	I0717 00:34:31.994595   78696 cli_runner.go:164] Run: docker container inspect ha-601338 --format={{.State.Status}}
	I0717 00:34:32.012375   78696 status.go:330] ha-601338 host status = "Stopped" (err=<nil>)
	I0717 00:34:32.012399   78696 status.go:343] host is not running, skipping remaining checks
	I0717 00:34:32.012406   78696 status.go:257] ha-601338 status: &{Name:ha-601338 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:34:32.012450   78696 status.go:255] checking status of ha-601338-m02 ...
	I0717 00:34:32.012754   78696 cli_runner.go:164] Run: docker container inspect ha-601338-m02 --format={{.State.Status}}
	I0717 00:34:32.041638   78696 status.go:330] ha-601338-m02 host status = "Stopped" (err=<nil>)
	I0717 00:34:32.041660   78696 status.go:343] host is not running, skipping remaining checks
	I0717 00:34:32.041667   78696 status.go:257] ha-601338-m02 status: &{Name:ha-601338-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:34:32.041685   78696 status.go:255] checking status of ha-601338-m04 ...
	I0717 00:34:32.042001   78696 cli_runner.go:164] Run: docker container inspect ha-601338-m04 --format={{.State.Status}}
	I0717 00:34:32.059329   78696 status.go:330] ha-601338-m04 host status = "Stopped" (err=<nil>)
	I0717 00:34:32.059358   78696 status.go:343] host is not running, skipping remaining checks
	I0717 00:34:32.059366   78696 status.go:257] ha-601338-m04 status: &{Name:ha-601338-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.77s)

TestMultiControlPlane/serial/RestartCluster (123.91s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-601338 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0717 00:34:32.153379    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
E0717 00:34:59.835828    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
E0717 00:35:17.565747    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-601338 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m3.028080481s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (123.91s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.56s)

TestMultiControlPlane/serial/AddSecondaryNode (71.83s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-601338 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-601338 --control-plane -v=7 --alsologtostderr: (1m10.806298766s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-601338 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-601338 status -v=7 --alsologtostderr: (1.021425773s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (71.83s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.73s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.73s)

TestJSONOutput/start/Command (55.69s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-461719 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-461719 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (55.688813416s)
--- PASS: TestJSONOutput/start/Command (55.69s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.69s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-461719 --output=json --user=testUser
E0717 00:38:54.518403    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-461719 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.78s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-461719 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-461719 --output=json --user=testUser: (5.784054769s)
--- PASS: TestJSONOutput/stop/Command (5.78s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-415851 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-415851 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (70.443994ms)
-- stdout --
	{"specversion":"1.0","id":"5888322b-2b54-4135-9354-950b0c058221","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-415851] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4f6726bc-28cb-4ca3-b4c1-dff238c021c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19265"}}
	{"specversion":"1.0","id":"ca129b5c-0ddf-43a2-9052-bc326ae4da80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d1c0a794-0a64-4b28-af17-b96318cd67e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19265-2269/kubeconfig"}}
	{"specversion":"1.0","id":"3897ae1a-b8e5-45a0-ac07-012935f2dc63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-2269/.minikube"}}
	{"specversion":"1.0","id":"21c17b01-cb5a-4602-a807-efb578cef70f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"223b2a4e-e9fc-437d-8dae-8cb188929323","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"71fc4925-7e41-4b5b-967f-51e0b9d4bd71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-415851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-415851
--- PASS: TestErrorJSONOutput (0.20s)

TestKicCustomNetwork/create_custom_network (39.04s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-179261 --network=
E0717 00:39:32.153235    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-179261 --network=: (36.99319837s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-179261" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-179261
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-179261: (2.023516625s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.04s)

TestKicCustomNetwork/use_default_bridge_network (36.56s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-589368 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-589368 --network=bridge: (34.532731091s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-589368" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-589368
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-589368: (2.004227185s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.56s)

TestKicExistingNetwork (31.95s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-713161 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-713161 --network=existing-network: (29.900758599s)
helpers_test.go:175: Cleaning up "existing-network-713161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-713161
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-713161: (1.894701652s)
--- PASS: TestKicExistingNetwork (31.95s)

TestKicCustomSubnet (37.24s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-145216 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-145216 --subnet=192.168.60.0/24: (35.065906301s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-145216 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-145216" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-145216
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-145216: (2.150694359s)
--- PASS: TestKicCustomSubnet (37.24s)

TestKicStaticIP (33.84s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-390637 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-390637 --static-ip=192.168.200.200: (31.58553034s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-390637 ip
helpers_test.go:175: Cleaning up "static-ip-390637" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-390637
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-390637: (2.109717095s)
--- PASS: TestKicStaticIP (33.84s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (70.95s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-538620 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-538620 --driver=docker  --container-runtime=crio: (31.017587757s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-541459 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-541459 --driver=docker  --container-runtime=crio: (34.456610857s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-538620
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-541459
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-541459" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-541459
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-541459: (2.014341658s)
helpers_test.go:175: Cleaning up "first-538620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-538620
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-538620: (2.261390867s)
--- PASS: TestMinikubeProfile (70.95s)

TestMountStart/serial/StartWithMountFirst (7.07s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-276361 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-276361 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.072549724s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.07s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-276361 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (7.55s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-289818 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-289818 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.551454853s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.55s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-289818 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.6s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-276361 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-276361 --alsologtostderr -v=5: (1.597089621s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-289818 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.19s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-289818
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-289818: (1.190002284s)
--- PASS: TestMountStart/serial/Stop (1.19s)

TestMountStart/serial/RestartStopped (8.77s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-289818
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-289818: (7.768936704s)
--- PASS: TestMountStart/serial/RestartStopped (8.77s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-289818 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (85.04s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-312133 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0717 00:43:54.518102    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
E0717 00:44:32.152971    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-312133 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m24.563557803s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (85.04s)

TestMultiNode/serial/DeployApp2Nodes (5.18s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-312133 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-312133 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-312133 -- rollout status deployment/busybox: (3.312231134s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-312133 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-312133 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-312133 -- exec busybox-fc5497c4f-4wkf4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-312133 -- exec busybox-fc5497c4f-l6ftv -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-312133 -- exec busybox-fc5497c4f-4wkf4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-312133 -- exec busybox-fc5497c4f-l6ftv -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-312133 -- exec busybox-fc5497c4f-4wkf4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-312133 -- exec busybox-fc5497c4f-l6ftv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.18s)

TestMultiNode/serial/PingHostFrom2Pods (0.95s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-312133 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-312133 -- exec busybox-fc5497c4f-4wkf4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-312133 -- exec busybox-fc5497c4f-4wkf4 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-312133 -- exec busybox-fc5497c4f-l6ftv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-312133 -- exec busybox-fc5497c4f-l6ftv -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.95s)

TestMultiNode/serial/AddNode (29.37s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-312133 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-312133 -v 3 --alsologtostderr: (28.720629849s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.37s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-312133 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

TestMultiNode/serial/CopyFile (9.65s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 cp testdata/cp-test.txt multinode-312133:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 ssh -n multinode-312133 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 cp multinode-312133:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2125677917/001/cp-test_multinode-312133.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 ssh -n multinode-312133 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 cp multinode-312133:/home/docker/cp-test.txt multinode-312133-m02:/home/docker/cp-test_multinode-312133_multinode-312133-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 ssh -n multinode-312133 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 ssh -n multinode-312133-m02 "sudo cat /home/docker/cp-test_multinode-312133_multinode-312133-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 cp multinode-312133:/home/docker/cp-test.txt multinode-312133-m03:/home/docker/cp-test_multinode-312133_multinode-312133-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 ssh -n multinode-312133 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 ssh -n multinode-312133-m03 "sudo cat /home/docker/cp-test_multinode-312133_multinode-312133-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 cp testdata/cp-test.txt multinode-312133-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 ssh -n multinode-312133-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 cp multinode-312133-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2125677917/001/cp-test_multinode-312133-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 ssh -n multinode-312133-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 cp multinode-312133-m02:/home/docker/cp-test.txt multinode-312133:/home/docker/cp-test_multinode-312133-m02_multinode-312133.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 ssh -n multinode-312133-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 ssh -n multinode-312133 "sudo cat /home/docker/cp-test_multinode-312133-m02_multinode-312133.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 cp multinode-312133-m02:/home/docker/cp-test.txt multinode-312133-m03:/home/docker/cp-test_multinode-312133-m02_multinode-312133-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 ssh -n multinode-312133-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 ssh -n multinode-312133-m03 "sudo cat /home/docker/cp-test_multinode-312133-m02_multinode-312133-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 cp testdata/cp-test.txt multinode-312133-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 ssh -n multinode-312133-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 cp multinode-312133-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2125677917/001/cp-test_multinode-312133-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 ssh -n multinode-312133-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 cp multinode-312133-m03:/home/docker/cp-test.txt multinode-312133:/home/docker/cp-test_multinode-312133-m03_multinode-312133.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 ssh -n multinode-312133-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 ssh -n multinode-312133 "sudo cat /home/docker/cp-test_multinode-312133-m03_multinode-312133.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 cp multinode-312133-m03:/home/docker/cp-test.txt multinode-312133-m02:/home/docker/cp-test_multinode-312133-m03_multinode-312133-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 ssh -n multinode-312133-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 ssh -n multinode-312133-m02 "sudo cat /home/docker/cp-test_multinode-312133-m03_multinode-312133-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.65s)

TestMultiNode/serial/StopNode (2.21s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 node stop m03
E0717 00:45:55.196979    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-312133 node stop m03: (1.201692859s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-312133 status: exit status 7 (506.064733ms)
-- stdout --
	multinode-312133
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-312133-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-312133-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-312133 status --alsologtostderr: exit status 7 (498.937654ms)
-- stdout --
	multinode-312133
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-312133-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-312133-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0717 00:45:56.792024  133396 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:45:56.792244  133396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:45:56.792275  133396 out.go:304] Setting ErrFile to fd 2...
	I0717 00:45:56.792296  133396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:45:56.792545  133396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-2269/.minikube/bin
	I0717 00:45:56.792757  133396 out.go:298] Setting JSON to false
	I0717 00:45:56.792824  133396 mustload.go:65] Loading cluster: multinode-312133
	I0717 00:45:56.792900  133396 notify.go:220] Checking for updates...
	I0717 00:45:56.794250  133396 config.go:182] Loaded profile config "multinode-312133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:45:56.794302  133396 status.go:255] checking status of multinode-312133 ...
	I0717 00:45:56.794928  133396 cli_runner.go:164] Run: docker container inspect multinode-312133 --format={{.State.Status}}
	I0717 00:45:56.811320  133396 status.go:330] multinode-312133 host status = "Running" (err=<nil>)
	I0717 00:45:56.811341  133396 host.go:66] Checking if "multinode-312133" exists ...
	I0717 00:45:56.811716  133396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-312133
	I0717 00:45:56.828925  133396 host.go:66] Checking if "multinode-312133" exists ...
	I0717 00:45:56.829312  133396 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:45:56.829376  133396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-312133
	I0717 00:45:56.852383  133396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/multinode-312133/id_rsa Username:docker}
	I0717 00:45:56.947969  133396 ssh_runner.go:195] Run: systemctl --version
	I0717 00:45:56.952066  133396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:45:56.963247  133396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:45:57.019134  133396 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:61 SystemTime:2024-07-17 00:45:57.007263553 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 00:45:57.019702  133396 kubeconfig.go:125] found "multinode-312133" server: "https://192.168.58.2:8443"
	I0717 00:45:57.019737  133396 api_server.go:166] Checking apiserver status ...
	I0717 00:45:57.019788  133396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:45:57.032228  133396 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1443/cgroup
	I0717 00:45:57.042472  133396 api_server.go:182] apiserver freezer: "7:freezer:/docker/88a590e82fd1fe50bec373c2b588e746cc977ee08b5511868b46f2bcf95e4f11/crio/crio-563a8215145e62b7b313ee225885141a77c7d6303c87e7645d050b4061680824"
	I0717 00:45:57.042552  133396 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/88a590e82fd1fe50bec373c2b588e746cc977ee08b5511868b46f2bcf95e4f11/crio/crio-563a8215145e62b7b313ee225885141a77c7d6303c87e7645d050b4061680824/freezer.state
	I0717 00:45:57.052077  133396 api_server.go:204] freezer state: "THAWED"
	I0717 00:45:57.052110  133396 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0717 00:45:57.059878  133396 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0717 00:45:57.059907  133396 status.go:422] multinode-312133 apiserver status = Running (err=<nil>)
	I0717 00:45:57.059920  133396 status.go:257] multinode-312133 status: &{Name:multinode-312133 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:45:57.059938  133396 status.go:255] checking status of multinode-312133-m02 ...
	I0717 00:45:57.060237  133396 cli_runner.go:164] Run: docker container inspect multinode-312133-m02 --format={{.State.Status}}
	I0717 00:45:57.076440  133396 status.go:330] multinode-312133-m02 host status = "Running" (err=<nil>)
	I0717 00:45:57.076462  133396 host.go:66] Checking if "multinode-312133-m02" exists ...
	I0717 00:45:57.076735  133396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-312133-m02
	I0717 00:45:57.093676  133396 host.go:66] Checking if "multinode-312133-m02" exists ...
	I0717 00:45:57.093984  133396 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:45:57.094023  133396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-312133-m02
	I0717 00:45:57.110975  133396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19265-2269/.minikube/machines/multinode-312133-m02/id_rsa Username:docker}
	I0717 00:45:57.208106  133396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:45:57.219971  133396 status.go:257] multinode-312133-m02 status: &{Name:multinode-312133-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:45:57.220002  133396 status.go:255] checking status of multinode-312133-m03 ...
	I0717 00:45:57.220315  133396 cli_runner.go:164] Run: docker container inspect multinode-312133-m03 --format={{.State.Status}}
	I0717 00:45:57.236653  133396 status.go:330] multinode-312133-m03 host status = "Stopped" (err=<nil>)
	I0717 00:45:57.236677  133396 status.go:343] host is not running, skipping remaining checks
	I0717 00:45:57.236685  133396 status.go:257] multinode-312133-m03 status: &{Name:multinode-312133-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.21s)

TestMultiNode/serial/StartAfterStop (9.91s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-312133 node start m03 -v=7 --alsologtostderr: (9.148967635s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.91s)

TestMultiNode/serial/RestartKeepsNodes (88.19s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-312133
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-312133
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-312133: (24.797404334s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-312133 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-312133 --wait=true -v=8 --alsologtostderr: (1m3.275161553s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-312133
--- PASS: TestMultiNode/serial/RestartKeepsNodes (88.19s)

TestMultiNode/serial/DeleteNode (5.28s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-312133 node delete m03: (4.612916702s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.28s)

TestMultiNode/serial/StopMultiNode (23.83s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-312133 stop: (23.660419376s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-312133 status: exit status 7 (86.533232ms)
-- stdout --
	multinode-312133
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-312133-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-312133 status --alsologtostderr: exit status 7 (84.692236ms)
-- stdout --
	multinode-312133
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-312133-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0717 00:48:04.418243  140855 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:48:04.418353  140855 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:48:04.418363  140855 out.go:304] Setting ErrFile to fd 2...
	I0717 00:48:04.418368  140855 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:48:04.418620  140855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-2269/.minikube/bin
	I0717 00:48:04.418828  140855 out.go:298] Setting JSON to false
	I0717 00:48:04.418859  140855 mustload.go:65] Loading cluster: multinode-312133
	I0717 00:48:04.418928  140855 notify.go:220] Checking for updates...
	I0717 00:48:04.419299  140855 config.go:182] Loaded profile config "multinode-312133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:48:04.419311  140855 status.go:255] checking status of multinode-312133 ...
	I0717 00:48:04.420127  140855 cli_runner.go:164] Run: docker container inspect multinode-312133 --format={{.State.Status}}
	I0717 00:48:04.436000  140855 status.go:330] multinode-312133 host status = "Stopped" (err=<nil>)
	I0717 00:48:04.436031  140855 status.go:343] host is not running, skipping remaining checks
	I0717 00:48:04.436040  140855 status.go:257] multinode-312133 status: &{Name:multinode-312133 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:48:04.436071  140855 status.go:255] checking status of multinode-312133-m02 ...
	I0717 00:48:04.436379  140855 cli_runner.go:164] Run: docker container inspect multinode-312133-m02 --format={{.State.Status}}
	I0717 00:48:04.457717  140855 status.go:330] multinode-312133-m02 host status = "Stopped" (err=<nil>)
	I0717 00:48:04.457743  140855 status.go:343] host is not running, skipping remaining checks
	I0717 00:48:04.457762  140855 status.go:257] multinode-312133-m02 status: &{Name:multinode-312133-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.83s)

TestMultiNode/serial/RestartMultiNode (55.81s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-312133 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0717 00:48:54.518131    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-312133 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (55.088583512s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-312133 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.81s)
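The go-template above prints one Ready-condition status per node. A minimal Go sketch of the kind of check the test then performs on that output (hypothetical helper, not the actual code in multinode_test.go; the quoting of the template output is an assumption):

```go
package main

import (
	"fmt"
	"strings"
)

// allNodesReady reports whether every non-empty line of the go-template
// output (one Ready-condition status per node) reads "True".
func allNodesReady(out string) bool {
	for _, line := range strings.Split(out, "\n") {
		// Strip the surrounding quotes and whitespace the template emits.
		s := strings.TrimSpace(strings.Trim(line, "' "))
		if s == "" {
			continue
		}
		if s != "True" {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(allNodesReady("' True\n True\n'"))  // both nodes Ready
	fmt.Println(allNodesReady("' True\n False\n'")) // one node NotReady
}
```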

TestMultiNode/serial/ValidateNameConflict (35.86s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-312133
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-312133-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-312133-m02 --driver=docker  --container-runtime=crio: exit status 14 (83.371131ms)

-- stdout --
	* [multinode-312133-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19265-2269/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-2269/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-312133-m02' is duplicated with machine name 'multinode-312133-m02' in profile 'multinode-312133'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-312133-m03 --driver=docker  --container-runtime=crio
E0717 00:49:32.152844    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-312133-m03 --driver=docker  --container-runtime=crio: (33.432239512s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-312133
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-312133: exit status 80 (294.244989ms)

-- stdout --
	* Adding node m03 to cluster multinode-312133 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-312133-m03 already exists in multinode-312133-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-312133-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-312133-m03: (1.998517229s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.86s)
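The MK_USAGE failure above fires because the requested profile name collides with an existing machine (node) name inside another profile. A hypothetical Go sketch of that uniqueness check (not minikube's actual validation code):

```go
package main

import "fmt"

// validateProfileName rejects a new profile whose name is already used as a
// machine name inside an existing profile, mirroring the MK_USAGE error above.
func validateProfileName(name string, existingMachines []string) error {
	for _, m := range existingMachines {
		if m == name {
			return fmt.Errorf("profile name %q is duplicated with machine name %q", name, m)
		}
	}
	return nil
}

func main() {
	machines := []string{"multinode-312133", "multinode-312133-m02"}
	fmt.Println(validateProfileName("multinode-312133-m02", machines)) // conflict
	fmt.Println(validateProfileName("multinode-312133-m03", machines)) // <nil>, allowed
}
```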

TestPreload (137.8s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-161716 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-161716 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m40.49113581s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-161716 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-161716 image pull gcr.io/k8s-minikube/busybox: (1.693876044s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-161716
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-161716: (5.766194755s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-161716 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-161716 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (27.265776308s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-161716 image list
helpers_test.go:175: Cleaning up "test-preload-161716" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-161716
E0717 00:51:57.566979    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-161716: (2.325662481s)
--- PASS: TestPreload (137.80s)

TestScheduledStopUnix (105.93s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-098336 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-098336 --memory=2048 --driver=docker  --container-runtime=crio: (29.224782963s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-098336 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-098336 -n scheduled-stop-098336
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-098336 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-098336 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-098336 -n scheduled-stop-098336
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-098336
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-098336 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-098336
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-098336: exit status 7 (66.680951ms)

-- stdout --
	scheduled-stop-098336
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-098336 -n scheduled-stop-098336
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-098336 -n scheduled-stop-098336: exit status 7 (62.673522ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-098336" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-098336
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-098336: (5.240771828s)
--- PASS: TestScheduledStopUnix (105.93s)

TestInsufficientStorage (10.93s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-847601 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-847601 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.481733756s)

-- stdout --
	{"specversion":"1.0","id":"4093680a-c36d-4121-9fef-63f7bf76b848","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-847601] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0e95a06a-5f02-4f38-97de-ac97df1f8d08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19265"}}
	{"specversion":"1.0","id":"ec55ad1f-14d3-40ff-adab-9d582235a27e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"934243b3-55ec-4555-8375-a4232691e8f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19265-2269/kubeconfig"}}
	{"specversion":"1.0","id":"f6fed52f-1069-4e9b-a419-ae53ebf58f87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-2269/.minikube"}}
	{"specversion":"1.0","id":"6830ed1a-e00c-42ce-a68c-034792747126","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"326746f1-9b05-4c98-9a14-e65b89868188","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8df96f9b-e898-4221-9541-baf45933cd2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"4e6f99d6-f0c9-4863-b879-7d74a9c412ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8cb7bb4f-9855-423f-9c00-3003706e4562","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f75bc360-b7c7-4d13-a631-f1e00f6c4d35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"49ea4b77-1370-477a-b26e-ab006ac449f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-847601\" primary control-plane node in \"insufficient-storage-847601\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"02633296-be02-4e29-bfa2-549d099b6110","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1721064868-19249 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"9689eea0-8793-435b-bffe-55c61190eb1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"3d9bde0a-1422-45ed-bf65-81f2985a62cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-847601 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-847601 --output=json --layout=cluster: exit status 7 (272.341268ms)

-- stdout --
	{"Name":"insufficient-storage-847601","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-847601","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0717 00:53:52.628282  158711 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-847601" does not appear in /home/jenkins/minikube-integration/19265-2269/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-847601 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-847601 --output=json --layout=cluster: exit status 7 (278.062971ms)

-- stdout --
	{"Name":"insufficient-storage-847601","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-847601","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0717 00:53:52.907859  158772 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-847601" does not appear in /home/jenkins/minikube-integration/19265-2269/kubeconfig
	E0717 00:53:52.917848  158772 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/insufficient-storage-847601/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-847601" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-847601
E0717 00:53:54.518029    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-847601: (1.893247269s)
--- PASS: TestInsufficientStorage (10.93s)

TestRunningBinaryUpgrade (110.95s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2495497251 start -p running-upgrade-192093 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2495497251 start -p running-upgrade-192093 --memory=2200 --vm-driver=docker  --container-runtime=crio: (36.131115627s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-192093 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0717 00:58:54.518019    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
E0717 00:59:32.153316    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-192093 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m11.168985811s)
helpers_test.go:175: Cleaning up "running-upgrade-192093" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-192093
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-192093: (2.560123254s)
--- PASS: TestRunningBinaryUpgrade (110.95s)

TestKubernetesUpgrade (397.13s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-975675 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-975675 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m14.204040953s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-975675
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-975675: (2.246193454s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-975675 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-975675 status --format={{.Host}}: exit status 7 (95.87076ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-975675 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-975675 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m37.089750142s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-975675 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-975675 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-975675 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (88.851283ms)

-- stdout --
	* [kubernetes-upgrade-975675] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19265-2269/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-2269/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-975675
	    minikube start -p kubernetes-upgrade-975675 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9756752 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-975675 --kubernetes-version=v1.31.0-beta.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-975675 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-975675 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.470693595s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-975675" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-975675
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-975675: (2.844572531s)
--- PASS: TestKubernetesUpgrade (397.13s)
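The K8S_DOWNGRADE_UNSUPPORTED error above is minikube refusing to move an existing cluster to an older Kubernetes minor version. A hypothetical Go sketch of that direction check (minikube's real comparison is more thorough; this only looks at the minor component of tags like "v1.31.0-beta.0"):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf extracts the minor version from a tag like "v1.31.0-beta.0".
func minorOf(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	n, _ := strconv.Atoi(parts[1])
	return n
}

// isDowngrade reports whether moving from current to requested would lower
// the Kubernetes minor version, which the log above shows is rejected.
func isDowngrade(current, requested string) bool {
	return minorOf(requested) < minorOf(current)
}

func main() {
	fmt.Println(isDowngrade("v1.31.0-beta.0", "v1.20.0")) // true, rejected
	fmt.Println(isDowngrade("v1.20.0", "v1.31.0-beta.0")) // false, an upgrade
}
```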

TestMissingContainerUpgrade (140.26s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3953485621 start -p missing-upgrade-242940 --memory=2200 --driver=docker  --container-runtime=crio
E0717 00:54:32.155051    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3953485621 start -p missing-upgrade-242940 --memory=2200 --driver=docker  --container-runtime=crio: (1m11.799605954s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-242940
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-242940: (5.154870061s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-242940
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-242940 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-242940 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m0.557408208s)
helpers_test.go:175: Cleaning up "missing-upgrade-242940" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-242940
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-242940: (2.017965731s)
--- PASS: TestMissingContainerUpgrade (140.26s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-481625 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-481625 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (73.953814ms)

-- stdout --
	* [NoKubernetes-481625] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19265-2269/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-2269/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
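The MK_USAGE exit above comes from `--no-kubernetes` and `--kubernetes-version` being mutually exclusive. A hypothetical Go sketch of that flag-validation rule (not minikube's actual flag handling):

```go
package main

import (
	"errors"
	"fmt"
)

// validateFlags mirrors the MK_USAGE check above: specifying a Kubernetes
// version is meaningless when Kubernetes is disabled entirely.
func validateFlags(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
	}
	return nil
}

func main() {
	fmt.Println(validateFlags(true, "1.20")) // rejected, exit status 14 in the log
	fmt.Println(validateFlags(true, ""))     // <nil>, allowed
}
```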

TestNoKubernetes/serial/StartWithK8s (43.64s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-481625 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-481625 --driver=docker  --container-runtime=crio: (43.216101262s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-481625 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.64s)

TestNoKubernetes/serial/StartWithStopK8s (7.12s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-481625 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-481625 --no-kubernetes --driver=docker  --container-runtime=crio: (4.709901527s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-481625 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-481625 status -o json: exit status 2 (322.465099ms)

-- stdout --
	{"Name":"NoKubernetes-481625","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-481625
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-481625: (2.086599135s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.12s)

TestNoKubernetes/serial/Start (6.69s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-481625 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-481625 --no-kubernetes --driver=docker  --container-runtime=crio: (6.69113026s)
--- PASS: TestNoKubernetes/serial/Start (6.69s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-481625 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-481625 "sudo systemctl is-active --quiet service kubelet": exit status 1 (279.124651ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

TestNoKubernetes/serial/ProfileList (0.91s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.91s)

TestNoKubernetes/serial/Stop (1.25s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-481625
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-481625: (1.245072303s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

TestNoKubernetes/serial/StartNoArgs (7.6s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-481625 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-481625 --driver=docker  --container-runtime=crio: (7.598129636s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.60s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-481625 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-481625 "sudo systemctl is-active --quiet service kubelet": exit status 1 (280.318941ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestStoppedBinaryUpgrade/Setup (1.07s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.07s)

TestStoppedBinaryUpgrade/Upgrade (84.26s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3243059253 start -p stopped-upgrade-606177 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3243059253 start -p stopped-upgrade-606177 --memory=2200 --vm-driver=docker  --container-runtime=crio: (47.133451133s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3243059253 -p stopped-upgrade-606177 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3243059253 -p stopped-upgrade-606177 stop: (2.439771173s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-606177 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-606177 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.684467606s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (84.26s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.53s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-606177
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-606177: (1.529753559s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.53s)

TestPause/serial/Start (62.86s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-870919 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-870919 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m2.864042009s)
--- PASS: TestPause/serial/Start (62.86s)

TestPause/serial/SecondStartNoReconfiguration (40.92s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-870919 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-870919 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.90181032s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.92s)

TestPause/serial/Pause (0.75s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-870919 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.75s)

TestPause/serial/VerifyStatus (0.34s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-870919 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-870919 --output=json --layout=cluster: exit status 2 (342.412659ms)

-- stdout --
	{"Name":"pause-870919","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-870919","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)

TestPause/serial/Unpause (0.71s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-870919 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.71s)

TestPause/serial/PauseAgain (0.99s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-870919 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.99s)

TestPause/serial/DeletePaused (2.45s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-870919 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-870919 --alsologtostderr -v=5: (2.450212386s)
--- PASS: TestPause/serial/DeletePaused (2.45s)

TestPause/serial/VerifyDeletedResources (0.36s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-870919
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-870919: exit status 1 (14.56819ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-870919: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.36s)

TestNetworkPlugins/group/false (4.58s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-040622 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-040622 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (242.280439ms)

-- stdout --
	* [false-040622] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19265-2269/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-2269/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration

-- /stdout --
** stderr ** 
	I0717 01:01:45.910749  198764 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:01:45.910983  198764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:01:45.911004  198764 out.go:304] Setting ErrFile to fd 2...
	I0717 01:01:45.911022  198764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:01:45.911283  198764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-2269/.minikube/bin
	I0717 01:01:45.911707  198764 out.go:298] Setting JSON to false
	I0717 01:01:45.912691  198764 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6258,"bootTime":1721171848,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0717 01:01:45.912785  198764 start.go:139] virtualization:  
	I0717 01:01:45.915257  198764 out.go:177] * [false-040622] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0717 01:01:45.916997  198764 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 01:01:45.917158  198764 notify.go:220] Checking for updates...
	I0717 01:01:45.921248  198764 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:01:45.923262  198764 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-2269/kubeconfig
	I0717 01:01:45.925104  198764 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-2269/.minikube
	I0717 01:01:45.926710  198764 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 01:01:45.928980  198764 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:01:45.931391  198764 config.go:182] Loaded profile config "force-systemd-flag-905770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:01:45.931501  198764 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:01:45.962027  198764 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0717 01:01:45.962140  198764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 01:01:46.086545  198764 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-17 01:01:46.076115752 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 01:01:46.086673  198764 docker.go:307] overlay module found
	I0717 01:01:46.088799  198764 out.go:177] * Using the docker driver based on user configuration
	I0717 01:01:46.090888  198764 start.go:297] selected driver: docker
	I0717 01:01:46.090903  198764 start.go:901] validating driver "docker" against <nil>
	I0717 01:01:46.090918  198764 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:01:46.093486  198764 out.go:177] 
	W0717 01:01:46.095454  198764 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0717 01:01:46.097548  198764 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-040622 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-040622

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-040622

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-040622

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-040622

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-040622

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-040622

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-040622

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-040622

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-040622

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-040622

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-040622

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-040622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-040622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-040622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-040622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-040622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-040622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-040622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-040622" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-040622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-040622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-040622" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-040622

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

>>> host: /etc/docker/daemon.json:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

>>> host: docker system info:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

>>> host: cri-docker daemon status:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

>>> host: cri-docker daemon config:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

>>> host: cri-dockerd version:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

>>> host: containerd daemon status:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

>>> host: containerd daemon config:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

>>> host: /etc/containerd/config.toml:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

>>> host: containerd config dump:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

>>> host: crio daemon status:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

>>> host: crio daemon config:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

>>> host: /etc/crio:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

>>> host: crio config:
* Profile "false-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040622"

----------------------- debugLogs end: false-040622 [took: 4.08614375s] --------------------------------
helpers_test.go:175: Cleaning up "false-040622" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-040622
--- PASS: TestNetworkPlugins/group/false (4.58s)

TestStartStop/group/old-k8s-version/serial/FirstStart (184.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-290904 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0717 01:03:54.517896    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
E0717 01:04:32.152447    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-290904 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (3m4.783063094s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (184.78s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-265655 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-265655 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2: (1m2.294688173s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.29s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-290904 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a2d790de-cc16-425d-b43b-fccdbb03e84a] Pending
helpers_test.go:344: "busybox" [a2d790de-cc16-425d-b43b-fccdbb03e84a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a2d790de-cc16-425d-b43b-fccdbb03e84a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003606011s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-290904 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.58s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-290904 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-290904 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.239372249s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-290904 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.41s)

TestStartStop/group/old-k8s-version/serial/Stop (12.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-290904 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-290904 --alsologtostderr -v=3: (12.065422209s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.07s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-290904 -n old-k8s-version-290904
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-290904 -n old-k8s-version-290904: exit status 7 (111.412085ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-290904 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.39s)

TestStartStop/group/old-k8s-version/serial/SecondStart (142.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-290904 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-290904 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m22.326864656s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-290904 -n old-k8s-version-290904
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (142.67s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-265655 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5d43f0b5-3b1f-49e7-aa58-d9e884053c6d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5d43f0b5-3b1f-49e7-aa58-d9e884053c6d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003250151s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-265655 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.45s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-265655 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-265655 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.624987726s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-265655 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.82s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-265655 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-265655 --alsologtostderr -v=3: (12.096516551s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.10s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-265655 -n default-k8s-diff-port-265655
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-265655 -n default-k8s-diff-port-265655: exit status 7 (67.447787ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-265655 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (272.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-265655 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2
E0717 01:08:37.567318    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
E0717 01:08:54.518627    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-265655 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2: (4m31.732282142s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-265655 -n default-k8s-diff-port-265655
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (272.10s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-27qqg" [dedc6d98-ded0-499c-a93d-8921e393727c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003622177s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-27qqg" [dedc6d98-ded0-499c-a93d-8921e393727c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007854895s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-290904 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-290904 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-290904 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-290904 -n old-k8s-version-290904
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-290904 -n old-k8s-version-290904: exit status 2 (310.375564ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-290904 -n old-k8s-version-290904
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-290904 -n old-k8s-version-290904: exit status 2 (307.49634ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-290904 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-290904 -n old-k8s-version-290904
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-290904 -n old-k8s-version-290904
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.91s)

TestStartStop/group/embed-certs/serial/FirstStart (61.54s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-023532 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2
E0717 01:09:32.153239    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-023532 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2: (1m1.543917143s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (61.54s)

TestStartStop/group/embed-certs/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-023532 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [01e012e3-5924-4a20-99cd-d51a503580b4] Pending
helpers_test.go:344: "busybox" [01e012e3-5924-4a20-99cd-d51a503580b4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [01e012e3-5924-4a20-99cd-d51a503580b4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00356927s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-023532 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.38s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-023532 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-023532 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/embed-certs/serial/Stop (12s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-023532 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-023532 --alsologtostderr -v=3: (12.004368641s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.00s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-023532 -n embed-certs-023532
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-023532 -n embed-certs-023532: exit status 7 (73.110331ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-023532 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (288.33s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-023532 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2
E0717 01:11:25.365357    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/old-k8s-version-290904/client.crt: no such file or directory
E0717 01:11:25.370600    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/old-k8s-version-290904/client.crt: no such file or directory
E0717 01:11:25.380814    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/old-k8s-version-290904/client.crt: no such file or directory
E0717 01:11:25.401026    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/old-k8s-version-290904/client.crt: no such file or directory
E0717 01:11:25.441280    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/old-k8s-version-290904/client.crt: no such file or directory
E0717 01:11:25.522364    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/old-k8s-version-290904/client.crt: no such file or directory
E0717 01:11:25.682682    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/old-k8s-version-290904/client.crt: no such file or directory
E0717 01:11:26.002969    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/old-k8s-version-290904/client.crt: no such file or directory
E0717 01:11:26.643177    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/old-k8s-version-290904/client.crt: no such file or directory
E0717 01:11:27.923469    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/old-k8s-version-290904/client.crt: no such file or directory
E0717 01:11:30.484243    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/old-k8s-version-290904/client.crt: no such file or directory
E0717 01:11:35.605425    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/old-k8s-version-290904/client.crt: no such file or directory
E0717 01:11:45.846154    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/old-k8s-version-290904/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-023532 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2: (4m47.982505801s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-023532 -n embed-certs-023532
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (288.33s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-hvl9n" [4b0e9372-77b0-4fce-99b5-aca16a38d35a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003293545s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-hvl9n" [4b0e9372-77b0-4fce-99b5-aca16a38d35a] Running
E0717 01:12:06.326392    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/old-k8s-version-290904/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011391005s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-265655 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-265655 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-265655 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-265655 -n default-k8s-diff-port-265655
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-265655 -n default-k8s-diff-port-265655: exit status 2 (317.850503ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-265655 -n default-k8s-diff-port-265655
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-265655 -n default-k8s-diff-port-265655: exit status 2 (314.275646ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-265655 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-265655 -n default-k8s-diff-port-265655
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-265655 -n default-k8s-diff-port-265655
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.01s)

TestStartStop/group/no-preload/serial/FirstStart (68.48s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-737770 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0717 01:12:47.287179    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/old-k8s-version-290904/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-737770 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m8.474941906s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.48s)

TestStartStop/group/no-preload/serial/DeployApp (8.4s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-737770 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9ce5fa4b-3903-4454-a14d-205aaf040d47] Pending
helpers_test.go:344: "busybox" [9ce5fa4b-3903-4454-a14d-205aaf040d47] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9ce5fa4b-3903-4454-a14d-205aaf040d47] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003592159s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-737770 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.40s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-737770 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-737770 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.053154662s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-737770 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/no-preload/serial/Stop (11.97s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-737770 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-737770 --alsologtostderr -v=3: (11.967809343s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.97s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-737770 -n no-preload-737770
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-737770 -n no-preload-737770: exit status 7 (67.839615ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-737770 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (266.29s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-737770 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0717 01:13:54.517777    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
E0717 01:14:09.207395    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/old-k8s-version-290904/client.crt: no such file or directory
E0717 01:14:32.152449    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-737770 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (4m25.936987765s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-737770 -n no-preload-737770
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.29s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-2dn5q" [31d02cfd-cb24-4bbe-b68d-536f65bf454b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003588581s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-2dn5q" [31d02cfd-cb24-4bbe-b68d-536f65bf454b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00352417s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-023532 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-023532 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.95s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-023532 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-023532 -n embed-certs-023532
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-023532 -n embed-certs-023532: exit status 2 (346.522378ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-023532 -n embed-certs-023532
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-023532 -n embed-certs-023532: exit status 2 (293.991283ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-023532 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-023532 -n embed-certs-023532
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-023532 -n embed-certs-023532
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.95s)

TestStartStop/group/newest-cni/serial/FirstStart (38.98s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-091830 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0717 01:16:25.365359    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/old-k8s-version-290904/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-091830 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (38.98093859s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.98s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.38s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-091830 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-091830 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.384346368s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.38s)

TestStartStop/group/newest-cni/serial/Stop (1.26s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-091830 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-091830 --alsologtostderr -v=3: (1.261888197s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.26s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-091830 -n newest-cni-091830
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-091830 -n newest-cni-091830: exit status 7 (62.607603ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-091830 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (17.6s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-091830 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0717 01:16:53.048131    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/old-k8s-version-290904/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-091830 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (17.212481366s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-091830 -n newest-cni-091830
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.60s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-091830 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (3.44s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-091830 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-091830 -n newest-cni-091830
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-091830 -n newest-cni-091830: exit status 2 (447.414847ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-091830 -n newest-cni-091830
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-091830 -n newest-cni-091830: exit status 2 (327.195176ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-091830 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-091830 -n newest-cni-091830
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-091830 -n newest-cni-091830
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.44s)

TestNetworkPlugins/group/auto/Start (60.41s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-040622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0717 01:17:05.201037    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/default-k8s-diff-port-265655/client.crt: no such file or directory
E0717 01:17:05.841377    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/default-k8s-diff-port-265655/client.crt: no such file or directory
E0717 01:17:07.122310    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/default-k8s-diff-port-265655/client.crt: no such file or directory
E0717 01:17:09.682878    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/default-k8s-diff-port-265655/client.crt: no such file or directory
E0717 01:17:14.803929    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/default-k8s-diff-port-265655/client.crt: no such file or directory
E0717 01:17:25.044641    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/default-k8s-diff-port-265655/client.crt: no such file or directory
E0717 01:17:45.525115    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/default-k8s-diff-port-265655/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-040622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m0.406140826s)
--- PASS: TestNetworkPlugins/group/auto/Start (60.41s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-040622 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-040622 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-fqz2t" [a4741714-dd64-42cf-aa2b-d9904e6370b9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-fqz2t" [a4741714-dd64-42cf-aa2b-d9904e6370b9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003983594s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.35s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-mx8s6" [51a38d11-1b69-47b5-b410-62cdf627f398] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004169829s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-040622 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-040622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-040622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.16s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-mx8s6" [51a38d11-1b69-47b5-b410-62cdf627f398] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004596248s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-737770 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.4s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-737770 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.84s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-737770 --alsologtostderr -v=1
E0717 01:18:26.486142    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/default-k8s-diff-port-265655/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-737770 --alsologtostderr -v=1: (1.113758746s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-737770 -n no-preload-737770
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-737770 -n no-preload-737770: exit status 2 (369.837256ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-737770 -n no-preload-737770
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-737770 -n no-preload-737770: exit status 2 (370.526946ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-737770 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-737770 -n no-preload-737770
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-737770 -n no-preload-737770
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.84s)
E0717 01:23:54.518600    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
E0717 01:24:07.300631    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/no-preload-737770/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (68.96s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-040622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-040622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m8.956207836s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (68.96s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (76.64s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-040622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0717 01:18:54.517779    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/addons-579136/client.crt: no such file or directory
E0717 01:19:15.197795    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
E0717 01:19:32.153352    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/functional-915248/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-040622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m16.638021111s)
--- PASS: TestNetworkPlugins/group/calico/Start (76.64s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-9h4cx" [edfa40af-50fc-4d3d-be9c-27b2afe8a785] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004792274s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.5s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-040622 "pgrep -a kubelet"
E0717 01:19:48.406682    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/default-k8s-diff-port-265655/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.50s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-040622 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-kndbb" [5a96ea8b-57ac-4cd0-a8e5-58c2a930bd6d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-kndbb" [5a96ea8b-57ac-4cd0-a8e5-58c2a930bd6d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004385531s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.34s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-kgw7s" [3e2a60e3-024d-4fbe-b496-7a8114b4fc73] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005538486s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-040622 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-040622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-040622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-040622 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (15.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-040622 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-zbjsq" [32e98872-9888-496d-8a87-cdcde0c8c031] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-zbjsq" [32e98872-9888-496d-8a87-cdcde0c8c031] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 15.004160894s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.26s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-040622 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-040622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-040622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (70.59s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-040622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-040622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m10.594529008s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.59s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (90.85s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-040622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0717 01:21:25.366009    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/old-k8s-version-290904/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-040622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m30.846818504s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (90.85s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-040622 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-040622 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-pzgjn" [a6fa7534-d26b-4aae-9034-2fa2e7c9ea2a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-pzgjn" [a6fa7534-d26b-4aae-9034-2fa2e7c9ea2a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.003851781s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.35s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-040622 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-040622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-040622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (65.69s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-040622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-040622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m5.688297878s)
--- PASS: TestNetworkPlugins/group/flannel/Start (65.69s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-040622 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.4s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-040622 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-t42dw" [78322991-9ee8-4863-9618-79869d823475] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-t42dw" [78322991-9ee8-4863-9618-79869d823475] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003400532s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.40s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-040622 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-040622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-040622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (88.63s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-040622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0717 01:23:06.037186    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/auto-040622/client.crt: no such file or directory
E0717 01:23:06.042449    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/auto-040622/client.crt: no such file or directory
E0717 01:23:06.052683    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/auto-040622/client.crt: no such file or directory
E0717 01:23:06.072925    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/auto-040622/client.crt: no such file or directory
E0717 01:23:06.113181    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/auto-040622/client.crt: no such file or directory
E0717 01:23:06.193434    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/auto-040622/client.crt: no such file or directory
E0717 01:23:06.353552    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/auto-040622/client.crt: no such file or directory
E0717 01:23:06.674587    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/auto-040622/client.crt: no such file or directory
E0717 01:23:07.315065    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/auto-040622/client.crt: no such file or directory
E0717 01:23:08.595674    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/auto-040622/client.crt: no such file or directory
E0717 01:23:11.156235    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/auto-040622/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-040622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m28.627167545s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.63s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-xdkgb" [7a12be72-0646-4309-bef6-026577e5a2b9] Running
E0717 01:23:16.276422    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/auto-040622/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004669285s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.48s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-040622 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.48s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (13.46s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-040622 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-nzqgn" [a4fc3ce9-2e67-49b9-9588-21dcd41768a3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 01:23:26.337500    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/no-preload-737770/client.crt: no such file or directory
E0717 01:23:26.342827    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/no-preload-737770/client.crt: no such file or directory
E0717 01:23:26.353069    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/no-preload-737770/client.crt: no such file or directory
E0717 01:23:26.373408    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/no-preload-737770/client.crt: no such file or directory
E0717 01:23:26.413680    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/no-preload-737770/client.crt: no such file or directory
E0717 01:23:26.494104    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/no-preload-737770/client.crt: no such file or directory
E0717 01:23:26.517380    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/auto-040622/client.crt: no such file or directory
E0717 01:23:26.654602    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/no-preload-737770/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-nzqgn" [a4fc3ce9-2e67-49b9-9588-21dcd41768a3] Running
E0717 01:23:26.975269    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/no-preload-737770/client.crt: no such file or directory
E0717 01:23:27.616485    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/no-preload-737770/client.crt: no such file or directory
E0717 01:23:28.897364    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/no-preload-737770/client.crt: no such file or directory
E0717 01:23:31.458458    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/no-preload-737770/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.003510676s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.46s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-040622 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-040622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-040622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-040622 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-040622 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-t5pkr" [b5a89264-d2a6-476e-9759-efe2395b21fd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-t5pkr" [b5a89264-d2a6-476e-9759-efe2395b21fd] Running
E0717 01:24:27.958368    7584 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-2269/.minikube/profiles/auto-040622/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004395359s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

TestNetworkPlugins/group/bridge/DNS (0.20s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-040622 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-040622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-040622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (33/336)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

TestDownloadOnly/v1.30.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

TestDownloadOnly/v1.30.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.2/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0.52s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-852192 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-852192" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-852192
--- SKIP: TestDownloadOnlyKic (0.52s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/Volcano (0s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano
=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-549530" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-549530
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

TestNetworkPlugins/group/kubenet (4.09s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-040622 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-040622
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-040622
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-040622
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-040622
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-040622
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-040622
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-040622
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-040622
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-040622
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-040622
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"
>>> host: /etc/hosts:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"
>>> host: /etc/resolv.conf:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-040622
>>> host: crictl pods:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"
>>> host: crictl containers:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"
>>> k8s: describe netcat deployment:
error: context "kubenet-040622" does not exist
>>> k8s: describe netcat pod(s):
error: context "kubenet-040622" does not exist
>>> k8s: netcat logs:
error: context "kubenet-040622" does not exist
>>> k8s: describe coredns deployment:
error: context "kubenet-040622" does not exist
>>> k8s: describe coredns pods:
error: context "kubenet-040622" does not exist
>>> k8s: coredns logs:
error: context "kubenet-040622" does not exist
>>> k8s: describe api server pod(s):
error: context "kubenet-040622" does not exist
>>> k8s: api server logs:
error: context "kubenet-040622" does not exist
>>> host: /etc/cni:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"
>>> host: ip a s:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"
>>> host: ip r s:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"
>>> host: iptables-save:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"
>>> host: iptables table nat:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-040622" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-040622" does not exist
>>> k8s: kube-proxy logs:
error: context "kubenet-040622" does not exist
>>> host: kubelet daemon status:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"
>>> host: kubelet daemon config:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"
>>> k8s: kubelet logs:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-040622

>>> host: docker daemon status:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"

>>> host: docker daemon config:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"

>>> host: docker system info:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"

>>> host: cri-docker daemon status:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"

>>> host: cri-docker daemon config:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"

>>> host: cri-dockerd version:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"

>>> host: containerd daemon status:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"

>>> host: containerd daemon config:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"

>>> host: containerd config dump:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"

>>> host: crio daemon status:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"

>>> host: crio daemon config:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"

>>> host: /etc/crio:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"

>>> host: crio config:
* Profile "kubenet-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040622"

----------------------- debugLogs end: kubenet-040622 [took: 3.909490019s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-040622" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-040622
--- SKIP: TestNetworkPlugins/group/kubenet (4.09s)

TestNetworkPlugins/group/cilium (5.77s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-040622 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-040622

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-040622

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-040622

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-040622

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-040622

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-040622

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-040622

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-040622

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-040622

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-040622

>>> host: /etc/nsswitch.conf:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: /etc/hosts:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: /etc/resolv.conf:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-040622

>>> host: crictl pods:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: crictl containers:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> k8s: describe netcat deployment:
error: context "cilium-040622" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-040622" does not exist

>>> k8s: netcat logs:
error: context "cilium-040622" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-040622" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-040622" does not exist

>>> k8s: coredns logs:
error: context "cilium-040622" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-040622" does not exist

>>> k8s: api server logs:
error: context "cilium-040622" does not exist

>>> host: /etc/cni:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: ip a s:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: ip r s:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: iptables-save:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: iptables table nat:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-040622

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-040622

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-040622" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-040622" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-040622

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-040622

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-040622" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-040622" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-040622" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-040622" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-040622" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: kubelet daemon config:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> k8s: kubelet logs:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-040622

>>> host: docker daemon status:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: docker daemon config:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: docker system info:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: cri-docker daemon status:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: cri-docker daemon config:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: cri-dockerd version:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: containerd daemon status:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: containerd daemon config:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: containerd config dump:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: crio daemon status:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: crio daemon config:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: /etc/crio:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

>>> host: crio config:
* Profile "cilium-040622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040622"

----------------------- debugLogs end: cilium-040622 [took: 5.568026223s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-040622" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-040622
--- SKIP: TestNetworkPlugins/group/cilium (5.77s)
