Test Report: Docker_Linux_crio_arm64 18665

dfbe577bff734bd70c7906dfbd0bc89e038b5d72:2024-04-17:34073

Test fail (2/327)

| Order | Failed test                        | Duration (s) |
|-------|------------------------------------|--------------|
| 30    | TestAddons/parallel/Ingress        | 166.81       |
| 32    | TestAddons/parallel/MetricsServer  | 364.68       |
TestAddons/parallel/Ingress (166.81s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-873604 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-873604 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-873604 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c0ee1b93-abde-4529-ae83-b97752c98a99] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c0ee1b93-abde-4529-ae83-b97752c98a99] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003757391s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-873604 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-873604 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.700862153s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-873604 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-873604 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.070914366s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-873604 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-873604 addons disable ingress-dns --alsologtostderr -v=1: (1.486271052s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-873604 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-873604 addons disable ingress --alsologtostderr -v=1: (7.782930624s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-873604
helpers_test.go:235: (dbg) docker inspect addons-873604:

-- stdout --
	[
	    {
	        "Id": "3fc24619954a7593d0d4dd5ee93c7caab0c44eeb97c7a5046a09cfe1e919f631",
	        "Created": "2024-04-17T19:09:49.125765957Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 694625,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-17T19:09:49.423029844Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f315bc3928e1aa212ec64171b55477a58b0d51266c0204d2cba9566780672a72",
	        "ResolvConfPath": "/var/lib/docker/containers/3fc24619954a7593d0d4dd5ee93c7caab0c44eeb97c7a5046a09cfe1e919f631/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3fc24619954a7593d0d4dd5ee93c7caab0c44eeb97c7a5046a09cfe1e919f631/hostname",
	        "HostsPath": "/var/lib/docker/containers/3fc24619954a7593d0d4dd5ee93c7caab0c44eeb97c7a5046a09cfe1e919f631/hosts",
	        "LogPath": "/var/lib/docker/containers/3fc24619954a7593d0d4dd5ee93c7caab0c44eeb97c7a5046a09cfe1e919f631/3fc24619954a7593d0d4dd5ee93c7caab0c44eeb97c7a5046a09cfe1e919f631-json.log",
	        "Name": "/addons-873604",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-873604:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-873604",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/78dc28b2ff51c773663d53c5f240b185b7934174d1c6b0e71b638383d84f6917-init/diff:/var/lib/docker/overlay2/05d9d5befaed30420d7a8f984a07ae80fc52626598e920d0ade8d12271084d40/diff",
	                "MergedDir": "/var/lib/docker/overlay2/78dc28b2ff51c773663d53c5f240b185b7934174d1c6b0e71b638383d84f6917/merged",
	                "UpperDir": "/var/lib/docker/overlay2/78dc28b2ff51c773663d53c5f240b185b7934174d1c6b0e71b638383d84f6917/diff",
	                "WorkDir": "/var/lib/docker/overlay2/78dc28b2ff51c773663d53c5f240b185b7934174d1c6b0e71b638383d84f6917/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-873604",
	                "Source": "/var/lib/docker/volumes/addons-873604/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-873604",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-873604",
	                "name.minikube.sigs.k8s.io": "addons-873604",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "59a62741523b0d53182d92c517313e971ec25e810c63d437a531cc275f9f2bae",
	            "SandboxKey": "/var/run/docker/netns/59a62741523b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33542"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33541"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33538"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33540"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33539"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-873604": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "0a9713c7292f458ca427fec72e9fdc386354489a98c0b20a7bca9591b589d0e2",
	                    "EndpointID": "6a1c0e59d2410f76f96505693890c8286a22d3834d47be002500ac43f5895edf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-873604",
	                        "3fc24619954a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-873604 -n addons-873604
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-873604 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-873604 logs -n 25: (1.545753919s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC | 17 Apr 24 19:09 UTC |
	| delete  | -p download-only-545184                                                                     | download-only-545184   | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC | 17 Apr 24 19:09 UTC |
	| delete  | -p download-only-251262                                                                     | download-only-251262   | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC | 17 Apr 24 19:09 UTC |
	| delete  | -p download-only-545184                                                                     | download-only-545184   | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC | 17 Apr 24 19:09 UTC |
	| start   | --download-only -p                                                                          | download-docker-474356 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC |                     |
	|         | download-docker-474356                                                                      |                        |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |                |                     |                     |
	|         | --driver=docker                                                                             |                        |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |                |                     |                     |
	| delete  | -p download-docker-474356                                                                   | download-docker-474356 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC | 17 Apr 24 19:09 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-250898   | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC |                     |
	|         | binary-mirror-250898                                                                        |                        |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |                |                     |                     |
	|         | http://127.0.0.1:33811                                                                      |                        |         |                |                     |                     |
	|         | --driver=docker                                                                             |                        |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |                |                     |                     |
	| delete  | -p binary-mirror-250898                                                                     | binary-mirror-250898   | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC | 17 Apr 24 19:09 UTC |
	| addons  | disable dashboard -p                                                                        | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC |                     |
	|         | addons-873604                                                                               |                        |         |                |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC |                     |
	|         | addons-873604                                                                               |                        |         |                |                     |                     |
	| start   | -p addons-873604 --wait=true                                                                | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC | 17 Apr 24 19:13 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |                |                     |                     |
	|         | --addons=registry                                                                           |                        |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |                |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |                |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:13 UTC | 17 Apr 24 19:13 UTC |
	|         | -p addons-873604                                                                            |                        |         |                |                     |                     |
	| ip      | addons-873604 ip                                                                            | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:13 UTC | 17 Apr 24 19:13 UTC |
	| addons  | addons-873604 addons disable                                                                | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:13 UTC | 17 Apr 24 19:13 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |                |                     |                     |
	|         | -v=1                                                                                        |                        |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:13 UTC | 17 Apr 24 19:13 UTC |
	|         | addons-873604                                                                               |                        |         |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:13 UTC | 17 Apr 24 19:13 UTC |
	|         | -p addons-873604                                                                            |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| ssh     | addons-873604 ssh cat                                                                       | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:13 UTC | 17 Apr 24 19:13 UTC |
	|         | /opt/local-path-provisioner/pvc-814c2d54-9fef-4b2f-bb69-2330200001c7_default_test-pvc/file1 |                        |         |                |                     |                     |
	| addons  | addons-873604 addons disable                                                                | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:13 UTC | 17 Apr 24 19:14 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | addons-873604 addons                                                                        | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:14 UTC | 17 Apr 24 19:14 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | addons-873604 addons                                                                        | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:14 UTC | 17 Apr 24 19:14 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:14 UTC | 17 Apr 24 19:14 UTC |
	|         | addons-873604                                                                               |                        |         |                |                     |                     |
	| ssh     | addons-873604 ssh curl -s                                                                   | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:14 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |                |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |                |                     |                     |
	| ip      | addons-873604 ip                                                                            | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:16 UTC | 17 Apr 24 19:16 UTC |
	| addons  | addons-873604 addons disable                                                                | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:17 UTC | 17 Apr 24 19:17 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |                |                     |                     |
	|         | -v=1                                                                                        |                        |         |                |                     |                     |
	| addons  | addons-873604 addons disable                                                                | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:17 UTC | 17 Apr 24 19:17 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/17 19:09:24
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0417 19:09:24.963608  694161 out.go:291] Setting OutFile to fd 1 ...
	I0417 19:09:24.963778  694161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:09:24.963786  694161 out.go:304] Setting ErrFile to fd 2...
	I0417 19:09:24.963791  694161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:09:24.964107  694161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-688109/.minikube/bin
	I0417 19:09:24.964776  694161 out.go:298] Setting JSON to false
	I0417 19:09:24.965912  694161 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10312,"bootTime":1713370653,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1057-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0417 19:09:24.965985  694161 start.go:139] virtualization:  
	I0417 19:09:24.969266  694161 out.go:177] * [addons-873604] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0417 19:09:24.972476  694161 out.go:177]   - MINIKUBE_LOCATION=18665
	I0417 19:09:24.975011  694161 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 19:09:24.972542  694161 notify.go:220] Checking for updates...
	I0417 19:09:24.977063  694161 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18665-688109/kubeconfig
	I0417 19:09:24.979166  694161 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-688109/.minikube
	I0417 19:09:24.981356  694161 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0417 19:09:24.983887  694161 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0417 19:09:24.986496  694161 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 19:09:25.017092  694161 docker.go:122] docker version: linux-26.0.1:Docker Engine - Community
	I0417 19:09:25.017230  694161 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0417 19:09:25.080357  694161 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-17 19:09:25.068772477 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0417 19:09:25.080496  694161 docker.go:295] overlay module found
	I0417 19:09:25.083264  694161 out.go:177] * Using the docker driver based on user configuration
	I0417 19:09:25.085875  694161 start.go:297] selected driver: docker
	I0417 19:09:25.085901  694161 start.go:901] validating driver "docker" against <nil>
	I0417 19:09:25.085916  694161 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0417 19:09:25.086605  694161 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0417 19:09:25.147547  694161 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-17 19:09:25.138652526 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0417 19:09:25.147723  694161 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0417 19:09:25.147953  694161 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0417 19:09:25.150664  694161 out.go:177] * Using Docker driver with root privileges
	I0417 19:09:25.153267  694161 cni.go:84] Creating CNI manager for ""
	I0417 19:09:25.153293  694161 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0417 19:09:25.153304  694161 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0417 19:09:25.153409  694161 start.go:340] cluster config:
	{Name:addons-873604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:addons-873604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:09:25.157220  694161 out.go:177] * Starting "addons-873604" primary control-plane node in "addons-873604" cluster
	I0417 19:09:25.159845  694161 cache.go:121] Beginning downloading kic base image for docker with crio
	I0417 19:09:25.162538  694161 out.go:177] * Pulling base image v0.0.43-1713236840-18649 ...
	I0417 19:09:25.165381  694161 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 19:09:25.165446  694161 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18665-688109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I0417 19:09:25.165468  694161 cache.go:56] Caching tarball of preloaded images
	I0417 19:09:25.165508  694161 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e in local docker daemon
	I0417 19:09:25.165587  694161 preload.go:173] Found /home/jenkins/minikube-integration/18665-688109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0417 19:09:25.165599  694161 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on crio
	I0417 19:09:25.165971  694161 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/config.json ...
	I0417 19:09:25.165995  694161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/config.json: {Name:mk21e21ce2e4cd3b7058fdf531f3edbc9d07af39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:09:25.179811  694161 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e to local cache
	I0417 19:09:25.179942  694161 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e in local cache directory
	I0417 19:09:25.179968  694161 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e in local cache directory, skipping pull
	I0417 19:09:25.179973  694161 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e exists in cache, skipping pull
	I0417 19:09:25.179985  694161 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e as a tarball
	I0417 19:09:25.179995  694161 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e from local cache
	I0417 19:09:41.842808  694161 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e from cached tarball
	I0417 19:09:41.842846  694161 cache.go:194] Successfully downloaded all kic artifacts
	I0417 19:09:41.842887  694161 start.go:360] acquireMachinesLock for addons-873604: {Name:mk9f3554f23e850971a17136b150084dad1ed5dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 19:09:41.843015  694161 start.go:364] duration metric: took 104.67µs to acquireMachinesLock for "addons-873604"
	I0417 19:09:41.843046  694161 start.go:93] Provisioning new machine with config: &{Name:addons-873604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:addons-873604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 19:09:41.843127  694161 start.go:125] createHost starting for "" (driver="docker")
	I0417 19:09:41.846368  694161 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0417 19:09:41.846610  694161 start.go:159] libmachine.API.Create for "addons-873604" (driver="docker")
	I0417 19:09:41.846644  694161 client.go:168] LocalClient.Create starting
	I0417 19:09:41.846757  694161 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18665-688109/.minikube/certs/ca.pem
	I0417 19:09:42.095468  694161 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18665-688109/.minikube/certs/cert.pem
	I0417 19:09:42.558138  694161 cli_runner.go:164] Run: docker network inspect addons-873604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0417 19:09:42.573000  694161 cli_runner.go:211] docker network inspect addons-873604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0417 19:09:42.573100  694161 network_create.go:281] running [docker network inspect addons-873604] to gather additional debugging logs...
	I0417 19:09:42.573124  694161 cli_runner.go:164] Run: docker network inspect addons-873604
	W0417 19:09:42.591114  694161 cli_runner.go:211] docker network inspect addons-873604 returned with exit code 1
	I0417 19:09:42.591149  694161 network_create.go:284] error running [docker network inspect addons-873604]: docker network inspect addons-873604: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-873604 not found
	I0417 19:09:42.591164  694161 network_create.go:286] output of [docker network inspect addons-873604]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-873604 not found
	
	** /stderr **
	I0417 19:09:42.591282  694161 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0417 19:09:42.607103  694161 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400299a2d0}
	I0417 19:09:42.607146  694161 network_create.go:124] attempt to create docker network addons-873604 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0417 19:09:42.607210  694161 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-873604 addons-873604
	I0417 19:09:42.689853  694161 network_create.go:108] docker network addons-873604 192.168.49.0/24 created
	I0417 19:09:42.689886  694161 kic.go:121] calculated static IP "192.168.49.2" for the "addons-873604" container
	I0417 19:09:42.689958  694161 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0417 19:09:42.702542  694161 cli_runner.go:164] Run: docker volume create addons-873604 --label name.minikube.sigs.k8s.io=addons-873604 --label created_by.minikube.sigs.k8s.io=true
	I0417 19:09:42.717123  694161 oci.go:103] Successfully created a docker volume addons-873604
	I0417 19:09:42.717222  694161 cli_runner.go:164] Run: docker run --rm --name addons-873604-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-873604 --entrypoint /usr/bin/test -v addons-873604:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e -d /var/lib
	I0417 19:09:44.764651  694161 cli_runner.go:217] Completed: docker run --rm --name addons-873604-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-873604 --entrypoint /usr/bin/test -v addons-873604:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e -d /var/lib: (2.047387664s)
	I0417 19:09:44.764683  694161 oci.go:107] Successfully prepared a docker volume addons-873604
	I0417 19:09:44.764710  694161 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 19:09:44.764729  694161 kic.go:194] Starting extracting preloaded images to volume ...
	I0417 19:09:44.764823  694161 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18665-688109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-873604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e -I lz4 -xf /preloaded.tar -C /extractDir
	I0417 19:09:49.057467  694161 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18665-688109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-873604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e -I lz4 -xf /preloaded.tar -C /extractDir: (4.292594327s)
	I0417 19:09:49.057509  694161 kic.go:203] duration metric: took 4.292776197s to extract preloaded images to volume ...
	W0417 19:09:49.057649  694161 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0417 19:09:49.057758  694161 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0417 19:09:49.112878  694161 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-873604 --name addons-873604 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-873604 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-873604 --network addons-873604 --ip 192.168.49.2 --volume addons-873604:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e
	I0417 19:09:49.434684  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Running}}
	I0417 19:09:49.457390  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:09:49.476716  694161 cli_runner.go:164] Run: docker exec addons-873604 stat /var/lib/dpkg/alternatives/iptables
	I0417 19:09:49.558792  694161 oci.go:144] the created container "addons-873604" has a running status.
	I0417 19:09:49.558823  694161 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa...
	I0417 19:09:50.362407  694161 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0417 19:09:50.380907  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:09:50.397804  694161 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0417 19:09:50.397829  694161 kic_runner.go:114] Args: [docker exec --privileged addons-873604 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0417 19:09:50.451523  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:09:50.480714  694161 machine.go:94] provisionDockerMachine start ...
	I0417 19:09:50.480883  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:09:50.498931  694161 main.go:141] libmachine: Using SSH client type: native
	I0417 19:09:50.499208  694161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33542 <nil> <nil>}
	I0417 19:09:50.499217  694161 main.go:141] libmachine: About to run SSH command:
	hostname
	I0417 19:09:50.647969  694161 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-873604
	
	I0417 19:09:50.647996  694161 ubuntu.go:169] provisioning hostname "addons-873604"
	I0417 19:09:50.648060  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:09:50.664784  694161 main.go:141] libmachine: Using SSH client type: native
	I0417 19:09:50.665033  694161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33542 <nil> <nil>}
	I0417 19:09:50.665050  694161 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-873604 && echo "addons-873604" | sudo tee /etc/hostname
	I0417 19:09:50.818507  694161 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-873604
	
	I0417 19:09:50.818587  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:09:50.835045  694161 main.go:141] libmachine: Using SSH client type: native
	I0417 19:09:50.835294  694161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33542 <nil> <nil>}
	I0417 19:09:50.835315  694161 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-873604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-873604/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-873604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0417 19:09:50.972315  694161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0417 19:09:50.972344  694161 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18665-688109/.minikube CaCertPath:/home/jenkins/minikube-integration/18665-688109/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18665-688109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18665-688109/.minikube}
	I0417 19:09:50.972372  694161 ubuntu.go:177] setting up certificates
	I0417 19:09:50.972409  694161 provision.go:84] configureAuth start
	I0417 19:09:50.972474  694161 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-873604
	I0417 19:09:50.988661  694161 provision.go:143] copyHostCerts
	I0417 19:09:50.988756  694161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-688109/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18665-688109/.minikube/ca.pem (1078 bytes)
	I0417 19:09:50.988887  694161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-688109/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18665-688109/.minikube/cert.pem (1123 bytes)
	I0417 19:09:50.988959  694161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-688109/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18665-688109/.minikube/key.pem (1675 bytes)
	I0417 19:09:50.989028  694161 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18665-688109/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18665-688109/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18665-688109/.minikube/certs/ca-key.pem org=jenkins.addons-873604 san=[127.0.0.1 192.168.49.2 addons-873604 localhost minikube]
	I0417 19:09:51.400106  694161 provision.go:177] copyRemoteCerts
	I0417 19:09:51.400172  694161 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0417 19:09:51.400211  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:09:51.415117  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:09:51.513429  694161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-688109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0417 19:09:51.537796  694161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-688109/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0417 19:09:51.562484  694161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-688109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0417 19:09:51.586537  694161 provision.go:87] duration metric: took 614.110058ms to configureAuth
	I0417 19:09:51.586563  694161 ubuntu.go:193] setting minikube options for container-runtime
	I0417 19:09:51.586759  694161 config.go:182] Loaded profile config "addons-873604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:09:51.586863  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:09:51.602459  694161 main.go:141] libmachine: Using SSH client type: native
	I0417 19:09:51.602721  694161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33542 <nil> <nil>}
	I0417 19:09:51.602735  694161 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0417 19:09:51.844026  694161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0417 19:09:51.844098  694161 machine.go:97] duration metric: took 1.363301107s to provisionDockerMachine
	I0417 19:09:51.844122  694161 client.go:171] duration metric: took 9.99746643s to LocalClient.Create
	I0417 19:09:51.844148  694161 start.go:167] duration metric: took 9.997537616s to libmachine.API.Create "addons-873604"
	I0417 19:09:51.844189  694161 start.go:293] postStartSetup for "addons-873604" (driver="docker")
	I0417 19:09:51.844215  694161 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0417 19:09:51.844304  694161 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0417 19:09:51.844438  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:09:51.861022  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:09:51.961927  694161 ssh_runner.go:195] Run: cat /etc/os-release
	I0417 19:09:51.965211  694161 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0417 19:09:51.965246  694161 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0417 19:09:51.965257  694161 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0417 19:09:51.965264  694161 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0417 19:09:51.965275  694161 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-688109/.minikube/addons for local assets ...
	I0417 19:09:51.965347  694161 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-688109/.minikube/files for local assets ...
	I0417 19:09:51.965379  694161 start.go:296] duration metric: took 121.171533ms for postStartSetup
	I0417 19:09:51.965717  694161 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-873604
	I0417 19:09:51.980683  694161 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/config.json ...
	I0417 19:09:51.980975  694161 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 19:09:51.981028  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:09:51.995950  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:09:52.089217  694161 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0417 19:09:52.093637  694161 start.go:128] duration metric: took 10.250495111s to createHost
	I0417 19:09:52.093661  694161 start.go:83] releasing machines lock for "addons-873604", held for 10.25063269s
	I0417 19:09:52.093730  694161 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-873604
	I0417 19:09:52.108788  694161 ssh_runner.go:195] Run: cat /version.json
	I0417 19:09:52.108849  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:09:52.108866  694161 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0417 19:09:52.108921  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:09:52.128282  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:09:52.138493  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:09:52.223893  694161 ssh_runner.go:195] Run: systemctl --version
	I0417 19:09:52.336035  694161 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0417 19:09:52.477393  694161 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0417 19:09:52.481753  694161 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0417 19:09:52.502207  694161 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0417 19:09:52.502369  694161 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0417 19:09:52.540312  694161 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
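The two `find` runs above disable the default loopback/bridge CNI configs by renaming them with a `.mk_disabled` suffix. A sketch of the same rename against a scratch directory (file names taken from the log; the directory stands in for /etc/cni/net.d, so no sudo is needed):

```shell
# Disable bridge/podman CNI configs by renaming them .mk_disabled,
# mirroring the find/mv pattern in the log above.
cni=$(mktemp -d)
touch "$cni/87-podman-bridge.conflist" "$cni/100-crio-bridge.conf" "$cni/10-other.conf"
find "$cni" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$cni"
```

Only the bridge/podman configs are renamed; unrelated configs are left in place.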
	I0417 19:09:52.540333  694161 start.go:494] detecting cgroup driver to use...
	I0417 19:09:52.540365  694161 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0417 19:09:52.540441  694161 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0417 19:09:52.558312  694161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0417 19:09:52.570537  694161 docker.go:217] disabling cri-docker service (if available) ...
	I0417 19:09:52.570599  694161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0417 19:09:52.584893  694161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0417 19:09:52.604347  694161 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0417 19:09:52.689853  694161 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0417 19:09:52.793560  694161 docker.go:233] disabling docker service ...
	I0417 19:09:52.793644  694161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0417 19:09:52.813607  694161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0417 19:09:52.825407  694161 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0417 19:09:52.916033  694161 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0417 19:09:53.008303  694161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0417 19:09:53.022307  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0417 19:09:53.039760  694161 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0417 19:09:53.039838  694161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:09:53.050324  694161 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0417 19:09:53.050394  694161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:09:53.060112  694161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:09:53.070226  694161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:09:53.080331  694161 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0417 19:09:53.089381  694161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:09:53.099115  694161 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:09:53.114224  694161 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
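The sequence of `sed` runs above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup driver, conmon cgroup, and the unprivileged-port sysctl. The same edits can be sketched against a local copy (the seed file contents are hypothetical; the sed expressions are taken from the log):

```shell
# Apply the log's CRI-O config edits to a local copy of 02-crio.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
pause_image = "registry.k8s.io/pause:3.8"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
# Add an empty default_sysctls list if missing, then prepend the port sysctl.
grep -q '^ *default_sysctls' "$conf" || \
  sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"
cat "$conf"
```

Note the GNU-sed-specific features in play (`-i` without suffix, `\n` in append text); the edits are idempotent apart from the `default_sysctls` prepend, which the log guards with the `grep -q` check.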
	I0417 19:09:53.123669  694161 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0417 19:09:53.132262  694161 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0417 19:09:53.140587  694161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:09:53.227395  694161 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0417 19:09:53.334187  694161 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0417 19:09:53.334307  694161 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
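The "Will wait 60s for socket path" step above polls with `stat` until the runtime socket appears. A sketch of that poll loop, with a scratch file standing in for /var/run/crio/crio.sock:

```shell
# Poll for a path with stat until it exists, as minikube does for the
# CRI-O socket (a background touch simulates crio coming up).
sock="$(mktemp -d)/crio.sock"
(sleep 0.2; touch "$sock") &
ok=""
for i in $(seq 1 100); do
  if stat "$sock" >/dev/null 2>&1; then ok=yes; break; fi
  sleep 0.1
done
echo "socket ready: $ok"
```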
	I0417 19:09:53.337831  694161 start.go:562] Will wait 60s for crictl version
	I0417 19:09:53.337940  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:09:53.341590  694161 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0417 19:09:53.380802  694161 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0417 19:09:53.381018  694161 ssh_runner.go:195] Run: crio --version
	I0417 19:09:53.424797  694161 ssh_runner.go:195] Run: crio --version
	I0417 19:09:53.471021  694161 out.go:177] * Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.24.6 ...
	I0417 19:09:53.472703  694161 cli_runner.go:164] Run: docker network inspect addons-873604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0417 19:09:53.486410  694161 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0417 19:09:53.490022  694161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
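The /etc/hosts update above is an idempotent strip-then-append: remove any existing `host.minikube.internal` line, then write a fresh one. The same idiom against a temp copy (a temp file stands in for /etc/hosts, so no sudo):

```shell
# Idempotently (re)write the host.minikube.internal entry, mirroring
# the grep -v / echo / cp pipeline in the log above.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Running it again yields the same file, which is why minikube can apply it unconditionally after the `grep` probe fails or succeeds.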
	I0417 19:09:53.501028  694161 kubeadm.go:877] updating cluster {Name:addons-873604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:addons-873604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0417 19:09:53.501156  694161 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 19:09:53.501237  694161 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 19:09:53.591376  694161 crio.go:514] all images are preloaded for cri-o runtime.
	I0417 19:09:53.591398  694161 crio.go:433] Images already preloaded, skipping extraction
	I0417 19:09:53.591458  694161 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 19:09:53.630564  694161 crio.go:514] all images are preloaded for cri-o runtime.
	I0417 19:09:53.630588  694161 cache_images.go:84] Images are preloaded, skipping loading
	I0417 19:09:53.630597  694161 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.0-rc.2 crio true true} ...
	I0417 19:09:53.630693  694161 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-873604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:addons-873604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0417 19:09:53.630775  694161 ssh_runner.go:195] Run: crio config
	I0417 19:09:53.678502  694161 cni.go:84] Creating CNI manager for ""
	I0417 19:09:53.678527  694161 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0417 19:09:53.678542  694161 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0417 19:09:53.678566  694161 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-873604 NodeName:addons-873604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0417 19:09:53.678725  694161 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-873604"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0417 19:09:53.678796  694161 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0417 19:09:53.687479  694161 binaries.go:44] Found k8s binaries, skipping transfer
	I0417 19:09:53.687546  694161 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0417 19:09:53.696224  694161 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0417 19:09:53.714264  694161 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0417 19:09:53.732898  694161 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0417 19:09:53.751430  694161 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0417 19:09:53.754843  694161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0417 19:09:53.765912  694161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:09:53.848635  694161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 19:09:53.862733  694161 certs.go:68] Setting up /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604 for IP: 192.168.49.2
	I0417 19:09:53.862797  694161 certs.go:194] generating shared ca certs ...
	I0417 19:09:53.862828  694161 certs.go:226] acquiring lock for ca certs: {Name:mk1d5cdf338d4da229e545e5e63248dcc873d21d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:09:53.862980  694161 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18665-688109/.minikube/ca.key
	I0417 19:09:54.045381  694161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-688109/.minikube/ca.crt ...
	I0417 19:09:54.045416  694161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-688109/.minikube/ca.crt: {Name:mk93cd65d0c6dce70744e607a147811e84a5870d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:09:54.046229  694161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-688109/.minikube/ca.key ...
	I0417 19:09:54.046249  694161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-688109/.minikube/ca.key: {Name:mkd68f826a3f0fe60b7fe39e9894fdc502e1006d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:09:54.046353  694161 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18665-688109/.minikube/proxy-client-ca.key
	I0417 19:09:54.619411  694161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-688109/.minikube/proxy-client-ca.crt ...
	I0417 19:09:54.619450  694161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-688109/.minikube/proxy-client-ca.crt: {Name:mkcf59b20b0c3249f1bca795a6e74d934bed98f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:09:54.619667  694161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-688109/.minikube/proxy-client-ca.key ...
	I0417 19:09:54.619683  694161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-688109/.minikube/proxy-client-ca.key: {Name:mk686c4b044fb5ee0d53aa4e8e625235d31d933f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:09:54.620394  694161 certs.go:256] generating profile certs ...
	I0417 19:09:54.620465  694161 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.key
	I0417 19:09:54.620485  694161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt with IP's: []
	I0417 19:09:54.825180  694161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt ...
	I0417 19:09:54.825209  694161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: {Name:mk61512f97c3a1aaa9ce05997d4f70e7a008ab1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:09:54.825404  694161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.key ...
	I0417 19:09:54.825420  694161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.key: {Name:mk42d4d44866af94905680123bce0f356f164dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:09:54.826053  694161 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/apiserver.key.5856b0aa
	I0417 19:09:54.826079  694161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/apiserver.crt.5856b0aa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0417 19:09:55.183901  694161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/apiserver.crt.5856b0aa ...
	I0417 19:09:55.183931  694161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/apiserver.crt.5856b0aa: {Name:mkd86a7b83a2852731ea06780d6308bc7c3bfafc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:09:55.184131  694161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/apiserver.key.5856b0aa ...
	I0417 19:09:55.184146  694161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/apiserver.key.5856b0aa: {Name:mk8731f1eae302d33cee016b798929c1117d2483 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:09:55.184230  694161 certs.go:381] copying /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/apiserver.crt.5856b0aa -> /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/apiserver.crt
	I0417 19:09:55.184316  694161 certs.go:385] copying /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/apiserver.key.5856b0aa -> /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/apiserver.key
	I0417 19:09:55.184370  694161 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/proxy-client.key
	I0417 19:09:55.184408  694161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/proxy-client.crt with IP's: []
	I0417 19:09:55.350368  694161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/proxy-client.crt ...
	I0417 19:09:55.350399  694161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/proxy-client.crt: {Name:mk73336e1835358968b7d66605ff7f94d6435bec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:09:55.350590  694161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/proxy-client.key ...
	I0417 19:09:55.350605  694161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/proxy-client.key: {Name:mkf6cf392164f5a780c94e94a48e3e83db8574be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:09:55.350793  694161 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-688109/.minikube/certs/ca-key.pem (1679 bytes)
	I0417 19:09:55.350838  694161 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-688109/.minikube/certs/ca.pem (1078 bytes)
	I0417 19:09:55.350878  694161 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-688109/.minikube/certs/cert.pem (1123 bytes)
	I0417 19:09:55.350907  694161 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-688109/.minikube/certs/key.pem (1675 bytes)
	I0417 19:09:55.351499  694161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-688109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0417 19:09:55.378424  694161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-688109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0417 19:09:55.404046  694161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-688109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0417 19:09:55.428519  694161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-688109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0417 19:09:55.453260  694161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0417 19:09:55.477437  694161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0417 19:09:55.501871  694161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0417 19:09:55.526404  694161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0417 19:09:55.551335  694161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-688109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0417 19:09:55.576785  694161 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0417 19:09:55.595384  694161 ssh_runner.go:195] Run: openssl version
	I0417 19:09:55.600830  694161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0417 19:09:55.610401  694161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:09:55.613931  694161 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 17 19:09 /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:09:55.613998  694161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:09:55.620906  694161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
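The trust-store steps above hash the CA cert with `openssl x509 -hash` and link it under that hash (the `b5213941.0` symlink in the log). A sketch with a throwaway self-signed cert in a scratch directory (subject and paths hypothetical):

```shell
# Generate a throwaway CA cert, compute its OpenSSL subject hash, and
# create the <hash>.0 symlink, as minikube does under /etc/ssl/certs.
certs=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$certs/ca.key" -out "$certs/minikubeCA.pem" -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$certs/minikubeCA.pem")
ln -fs "$certs/minikubeCA.pem" "$certs/$hash.0"
ls -la "$certs"
```

OpenSSL resolves certs in a hashed directory by exactly this `<subject-hash>.N` naming, which is why the symlink name matters and the file name does not.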
	I0417 19:09:55.630445  694161 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0417 19:09:55.634051  694161 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0417 19:09:55.634099  694161 kubeadm.go:391] StartCluster: {Name:addons-873604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:addons-873604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:09:55.634226  694161 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0417 19:09:55.634322  694161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0417 19:09:55.673154  694161 cri.go:89] found id: ""
	I0417 19:09:55.673273  694161 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0417 19:09:55.682155  694161 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0417 19:09:55.691129  694161 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0417 19:09:55.691226  694161 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0417 19:09:55.700053  694161 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0417 19:09:55.700075  694161 kubeadm.go:156] found existing configuration files:
	
	I0417 19:09:55.700145  694161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0417 19:09:55.709260  694161 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0417 19:09:55.709353  694161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0417 19:09:55.717896  694161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0417 19:09:55.726678  694161 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0417 19:09:55.726745  694161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0417 19:09:55.735788  694161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0417 19:09:55.745158  694161 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0417 19:09:55.745226  694161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0417 19:09:55.753854  694161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0417 19:09:55.762767  694161 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0417 19:09:55.762878  694161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0417 19:09:55.771438  694161 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0417 19:09:55.818215  694161 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.2
	I0417 19:09:55.818465  694161 kubeadm.go:309] [preflight] Running pre-flight checks
	I0417 19:09:55.859328  694161 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0417 19:09:55.859400  694161 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1057-aws
	I0417 19:09:55.859439  694161 kubeadm.go:309] OS: Linux
	I0417 19:09:55.859494  694161 kubeadm.go:309] CGROUPS_CPU: enabled
	I0417 19:09:55.859544  694161 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0417 19:09:55.859593  694161 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0417 19:09:55.859642  694161 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0417 19:09:55.859690  694161 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0417 19:09:55.859739  694161 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0417 19:09:55.859785  694161 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0417 19:09:55.859834  694161 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0417 19:09:55.859881  694161 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0417 19:09:55.930155  694161 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0417 19:09:55.930267  694161 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0417 19:09:55.930360  694161 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0417 19:09:56.191102  694161 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0417 19:09:56.193987  694161 out.go:204]   - Generating certificates and keys ...
	I0417 19:09:56.194095  694161 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0417 19:09:56.194174  694161 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0417 19:09:56.624005  694161 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0417 19:09:57.316330  694161 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0417 19:09:57.776706  694161 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0417 19:09:58.506835  694161 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0417 19:09:58.796796  694161 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0417 19:09:58.797127  694161 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-873604 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0417 19:09:59.088963  694161 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0417 19:09:59.089258  694161 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-873604 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0417 19:09:59.485242  694161 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0417 19:09:59.889216  694161 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0417 19:10:00.470845  694161 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0417 19:10:00.485669  694161 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0417 19:10:01.378507  694161 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0417 19:10:01.982884  694161 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0417 19:10:02.268403  694161 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0417 19:10:02.722430  694161 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0417 19:10:03.104831  694161 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0417 19:10:03.105777  694161 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0417 19:10:03.109186  694161 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0417 19:10:03.111961  694161 out.go:204]   - Booting up control plane ...
	I0417 19:10:03.112075  694161 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0417 19:10:03.112163  694161 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0417 19:10:03.114553  694161 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0417 19:10:03.126333  694161 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0417 19:10:03.127511  694161 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0417 19:10:03.127567  694161 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0417 19:10:03.228776  694161 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0417 19:10:03.228865  694161 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0417 19:10:04.243034  694161 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.014338995s
	I0417 19:10:04.243120  694161 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0417 19:10:10.247345  694161 kubeadm.go:309] [api-check] The API server is healthy after 6.002158165s
	I0417 19:10:10.264987  694161 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0417 19:10:10.285762  694161 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0417 19:10:10.320943  694161 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0417 19:10:10.321166  694161 kubeadm.go:309] [mark-control-plane] Marking the node addons-873604 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0417 19:10:10.331103  694161 kubeadm.go:309] [bootstrap-token] Using token: f332dj.4fi44gqjkjxhwrp9
	I0417 19:10:10.333195  694161 out.go:204]   - Configuring RBAC rules ...
	I0417 19:10:10.333334  694161 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0417 19:10:10.337964  694161 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0417 19:10:10.347103  694161 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0417 19:10:10.350937  694161 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0417 19:10:10.357386  694161 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0417 19:10:10.361086  694161 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0417 19:10:10.651393  694161 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0417 19:10:11.098674  694161 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0417 19:10:11.652717  694161 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0417 19:10:11.653986  694161 kubeadm.go:309] 
	I0417 19:10:11.654057  694161 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0417 19:10:11.654068  694161 kubeadm.go:309] 
	I0417 19:10:11.654143  694161 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0417 19:10:11.654151  694161 kubeadm.go:309] 
	I0417 19:10:11.654177  694161 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0417 19:10:11.654237  694161 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0417 19:10:11.654293  694161 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0417 19:10:11.654306  694161 kubeadm.go:309] 
	I0417 19:10:11.654357  694161 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0417 19:10:11.654365  694161 kubeadm.go:309] 
	I0417 19:10:11.654411  694161 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0417 19:10:11.654420  694161 kubeadm.go:309] 
	I0417 19:10:11.654470  694161 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0417 19:10:11.654548  694161 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0417 19:10:11.654618  694161 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0417 19:10:11.654626  694161 kubeadm.go:309] 
	I0417 19:10:11.654707  694161 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0417 19:10:11.654784  694161 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0417 19:10:11.654792  694161 kubeadm.go:309] 
	I0417 19:10:11.654873  694161 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token f332dj.4fi44gqjkjxhwrp9 \
	I0417 19:10:11.654975  694161 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64e6df13a2dfd9033b0e1d5e98b3cfd2efe34f46e411a8fa9e48d2f90687e6a8 \
	I0417 19:10:11.655000  694161 kubeadm.go:309] 	--control-plane 
	I0417 19:10:11.655005  694161 kubeadm.go:309] 
	I0417 19:10:11.655090  694161 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0417 19:10:11.655098  694161 kubeadm.go:309] 
	I0417 19:10:11.655177  694161 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token f332dj.4fi44gqjkjxhwrp9 \
	I0417 19:10:11.655280  694161 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64e6df13a2dfd9033b0e1d5e98b3cfd2efe34f46e411a8fa9e48d2f90687e6a8 
	I0417 19:10:11.658815  694161 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1057-aws\n", err: exit status 1
	I0417 19:10:11.658933  694161 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0417 19:10:11.658953  694161 cni.go:84] Creating CNI manager for ""
	I0417 19:10:11.658961  694161 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0417 19:10:11.661539  694161 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0417 19:10:11.663688  694161 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0417 19:10:11.667622  694161 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl ...
	I0417 19:10:11.667647  694161 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0417 19:10:11.687434  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0417 19:10:11.983629  694161 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0417 19:10:11.983692  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:11.983795  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-873604 minikube.k8s.io/updated_at=2024_04_17T19_10_11_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3 minikube.k8s.io/name=addons-873604 minikube.k8s.io/primary=true
	I0417 19:10:12.162299  694161 ops.go:34] apiserver oom_adj: -16
	I0417 19:10:12.162392  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:12.662843  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:13.162486  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:13.663177  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:14.162571  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:14.662489  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:15.163090  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:15.663278  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:16.162484  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:16.662935  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:17.163308  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:17.662634  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:18.163007  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:18.662559  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:19.163438  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:19.662558  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:20.162530  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:20.662873  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:21.163519  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:21.663100  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:22.162980  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:22.662551  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:23.162549  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:23.662657  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:24.162597  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:24.663394  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:25.162860  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:25.268925  694161 kubeadm.go:1107] duration metric: took 13.285299404s to wait for elevateKubeSystemPrivileges
	W0417 19:10:25.268959  694161 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0417 19:10:25.268967  694161 kubeadm.go:393] duration metric: took 29.634872006s to StartCluster
	I0417 19:10:25.268983  694161 settings.go:142] acquiring lock: {Name:mkca3c46bd90bd66268d8c5f3823c8842153ebd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:10:25.269101  694161 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18665-688109/kubeconfig
	I0417 19:10:25.269573  694161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-688109/kubeconfig: {Name:mk9d670643a338e225544addd9a80feeadd71982 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:10:25.270578  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0417 19:10:25.270618  694161 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 19:10:25.273454  694161 out.go:177] * Verifying Kubernetes components...
	I0417 19:10:25.270849  694161 config.go:182] Loaded profile config "addons-873604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:10:25.270859  694161 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0417 19:10:25.275367  694161 addons.go:69] Setting yakd=true in profile "addons-873604"
	I0417 19:10:25.275381  694161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:10:25.275397  694161 addons.go:234] Setting addon yakd=true in "addons-873604"
	I0417 19:10:25.275428  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.275471  694161 addons.go:69] Setting ingress-dns=true in profile "addons-873604"
	I0417 19:10:25.275493  694161 addons.go:234] Setting addon ingress-dns=true in "addons-873604"
	I0417 19:10:25.275523  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.275910  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.275916  694161 addons.go:69] Setting inspektor-gadget=true in profile "addons-873604"
	I0417 19:10:25.275934  694161 addons.go:234] Setting addon inspektor-gadget=true in "addons-873604"
	I0417 19:10:25.275951  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.276282  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.277235  694161 addons.go:69] Setting metrics-server=true in profile "addons-873604"
	I0417 19:10:25.277286  694161 addons.go:69] Setting cloud-spanner=true in profile "addons-873604"
	I0417 19:10:25.277305  694161 addons.go:234] Setting addon metrics-server=true in "addons-873604"
	I0417 19:10:25.277310  694161 addons.go:234] Setting addon cloud-spanner=true in "addons-873604"
	I0417 19:10:25.277337  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.277342  694161 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-873604"
	I0417 19:10:25.277376  694161 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-873604"
	I0417 19:10:25.277391  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.277751  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.277337  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.278084  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.277751  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.286027  694161 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-873604"
	I0417 19:10:25.286067  694161 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-873604"
	I0417 19:10:25.286111  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.286535  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.288671  694161 addons.go:69] Setting default-storageclass=true in profile "addons-873604"
	I0417 19:10:25.288714  694161 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-873604"
	I0417 19:10:25.289006  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.289447  694161 addons.go:69] Setting registry=true in profile "addons-873604"
	I0417 19:10:25.289482  694161 addons.go:234] Setting addon registry=true in "addons-873604"
	I0417 19:10:25.289521  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.289904  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.315876  694161 addons.go:69] Setting storage-provisioner=true in profile "addons-873604"
	I0417 19:10:25.315927  694161 addons.go:234] Setting addon storage-provisioner=true in "addons-873604"
	I0417 19:10:25.315965  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.316459  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.328653  694161 addons.go:69] Setting gcp-auth=true in profile "addons-873604"
	I0417 19:10:25.328704  694161 mustload.go:65] Loading cluster: addons-873604
	I0417 19:10:25.328872  694161 config.go:182] Loaded profile config "addons-873604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:10:25.329111  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.344718  694161 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-873604"
	I0417 19:10:25.344768  694161 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-873604"
	I0417 19:10:25.345062  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.356615  694161 addons.go:69] Setting ingress=true in profile "addons-873604"
	I0417 19:10:25.356657  694161 addons.go:234] Setting addon ingress=true in "addons-873604"
	I0417 19:10:25.356706  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.357122  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.358272  694161 addons.go:69] Setting volumesnapshots=true in profile "addons-873604"
	I0417 19:10:25.358309  694161 addons.go:234] Setting addon volumesnapshots=true in "addons-873604"
	I0417 19:10:25.358346  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.358761  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.275911  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.419473  694161 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0417 19:10:25.434724  694161 out.go:177]   - Using image docker.io/registry:2.8.3
	I0417 19:10:25.438643  694161 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0417 19:10:25.448318  694161 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0417 19:10:25.448352  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0417 19:10:25.448454  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.447769  694161 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0417 19:10:25.465667  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0417 19:10:25.465767  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.447780  694161 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0417 19:10:25.447785  694161 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0417 19:10:25.447789  694161 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0417 19:10:25.447793  694161 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0417 19:10:25.495766  694161 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0417 19:10:25.496013  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.497486  694161 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-873604"
	I0417 19:10:25.502462  694161 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0417 19:10:25.502521  694161 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0417 19:10:25.502552  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0417 19:10:25.506059  694161 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0417 19:10:25.507009  694161 addons.go:234] Setting addon default-storageclass=true in "addons-873604"
	I0417 19:10:25.507244  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.507761  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.516591  694161 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0417 19:10:25.512264  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.512313  694161 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0417 19:10:25.514168  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0417 19:10:25.514240  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.514247  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0417 19:10:25.521625  694161 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0417 19:10:25.522054  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.524967  694161 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0417 19:10:25.525182  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.530440  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.530535  694161 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0417 19:10:25.534095  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0417 19:10:25.544617  694161 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0417 19:10:25.554427  694161 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0417 19:10:25.554687  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0417 19:10:25.556878  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.560046  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0417 19:10:25.564658  694161 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0417 19:10:25.564746  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.573780  694161 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0417 19:10:25.573861  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.593261  694161 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0417 19:10:25.593287  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0417 19:10:25.593350  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.616636  694161 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0417 19:10:25.615536  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:25.615757  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:25.616995  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0417 19:10:25.626002  694161 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0417 19:10:25.627945  694161 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0417 19:10:25.629714  694161 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0417 19:10:25.628421  694161 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0417 19:10:25.634062  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0417 19:10:25.634192  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.647069  694161 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0417 19:10:25.649681  694161 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0417 19:10:25.655101  694161 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0417 19:10:25.657659  694161 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0417 19:10:25.657736  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0417 19:10:25.657834  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.686001  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:25.690121  694161 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0417 19:10:25.690148  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0417 19:10:25.690210  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.717001  694161 out.go:177]   - Using image docker.io/busybox:stable
	I0417 19:10:25.723299  694161 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0417 19:10:25.725491  694161 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0417 19:10:25.725516  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0417 19:10:25.725707  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.740591  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:25.772336  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:25.776824  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:25.782712  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:25.806812  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:25.809023  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:25.809865  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:25.828688  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:25.833988  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:25.841930  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:26.074562  694161 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0417 19:10:26.074589  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0417 19:10:26.076512  694161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 19:10:26.185738  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0417 19:10:26.197877  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0417 19:10:26.205301  694161 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0417 19:10:26.205325  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0417 19:10:26.247789  694161 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0417 19:10:26.247864  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0417 19:10:26.293716  694161 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0417 19:10:26.293781  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0417 19:10:26.318344  694161 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0417 19:10:26.318412  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0417 19:10:26.324349  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0417 19:10:26.347358  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0417 19:10:26.347916  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0417 19:10:26.352561  694161 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0417 19:10:26.352633  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0417 19:10:26.395127  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0417 19:10:26.409508  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0417 19:10:26.431620  694161 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0417 19:10:26.431687  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0417 19:10:26.435088  694161 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0417 19:10:26.435157  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0417 19:10:26.440535  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0417 19:10:26.494929  694161 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0417 19:10:26.495004  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0417 19:10:26.504252  694161 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0417 19:10:26.504321  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0417 19:10:26.518486  694161 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0417 19:10:26.518570  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0417 19:10:26.613502  694161 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0417 19:10:26.613575  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0417 19:10:26.616909  694161 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0417 19:10:26.616987  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0417 19:10:26.700260  694161 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0417 19:10:26.700515  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0417 19:10:26.700494  694161 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0417 19:10:26.700607  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0417 19:10:26.718739  694161 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0417 19:10:26.718808  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0417 19:10:26.753135  694161 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0417 19:10:26.753201  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0417 19:10:26.762415  694161 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0417 19:10:26.762481  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0417 19:10:26.855594  694161 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0417 19:10:26.855659  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0417 19:10:26.888738  694161 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0417 19:10:26.888827  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0417 19:10:26.889018  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0417 19:10:26.926696  694161 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0417 19:10:26.926782  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0417 19:10:26.945001  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0417 19:10:26.958707  694161 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0417 19:10:26.958780  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0417 19:10:26.985547  694161 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0417 19:10:26.985620  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0417 19:10:27.030655  694161 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0417 19:10:27.030733  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0417 19:10:27.033941  694161 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0417 19:10:27.034020  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0417 19:10:27.074058  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0417 19:10:27.145612  694161 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0417 19:10:27.145677  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0417 19:10:27.146006  694161 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0417 19:10:27.146046  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0417 19:10:27.204337  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0417 19:10:27.251630  694161 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0417 19:10:27.251696  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0417 19:10:27.414564  694161 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0417 19:10:27.414644  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0417 19:10:27.544801  694161 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0417 19:10:27.544866  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0417 19:10:27.689791  694161 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0417 19:10:27.689823  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0417 19:10:27.840076  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0417 19:10:28.633453  694161 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.008876091s)
	I0417 19:10:28.633490  694161 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0417 19:10:28.634501  694161 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.557958799s)
	I0417 19:10:28.635196  694161 node_ready.go:35] waiting up to 6m0s for node "addons-873604" to be "Ready" ...
	I0417 19:10:29.166668  694161 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-873604" context rescaled to 1 replicas
	I0417 19:10:29.353382  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.167610217s)
	I0417 19:10:29.683820  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.485861649s)
	I0417 19:10:30.664201  694161 node_ready.go:53] node "addons-873604" has status "Ready":"False"
	I0417 19:10:31.355545  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.031085839s)
	I0417 19:10:31.356024  694161 addons.go:470] Verifying addon ingress=true in "addons-873604"
	I0417 19:10:31.358110  694161 out.go:177] * Verifying ingress addon...
	I0417 19:10:31.355763  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.008368405s)
	I0417 19:10:31.355813  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.007853621s)
	I0417 19:10:31.355832  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.960683693s)
	I0417 19:10:31.355849  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.946283273s)
	I0417 19:10:31.355888  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.915290989s)
	I0417 19:10:31.355934  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.466882199s)
	I0417 19:10:31.355971  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.410904382s)
	I0417 19:10:31.363011  694161 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-873604 service yakd-dashboard -n yakd-dashboard
	
	I0417 19:10:31.361493  694161 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0417 19:10:31.361662  694161 addons.go:470] Verifying addon registry=true in "addons-873604"
	I0417 19:10:31.361678  694161 addons.go:470] Verifying addon metrics-server=true in "addons-873604"
	I0417 19:10:31.367392  694161 out.go:177] * Verifying registry addon...
	I0417 19:10:31.371128  694161 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W0417 19:10:31.393288  694161 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0417 19:10:31.421529  694161 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0417 19:10:31.421558  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:31.422113  694161 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0417 19:10:31.422159  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:31.456565  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.382402382s)
	W0417 19:10:31.456601  694161 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0417 19:10:31.456624  694161 retry.go:31] will retry after 304.18179ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0417 19:10:31.456695  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.252245288s)
	I0417 19:10:31.723649  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.883526582s)
	I0417 19:10:31.723750  694161 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-873604"
	I0417 19:10:31.727409  694161 out.go:177] * Verifying csi-hostpath-driver addon...
	I0417 19:10:31.730837  694161 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0417 19:10:31.761886  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0417 19:10:31.769910  694161 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0417 19:10:31.769937  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:31.870305  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:31.884834  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:32.235388  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:32.369411  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:32.382452  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:32.741173  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:32.868971  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:32.877267  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:33.138965  694161 node_ready.go:53] node "addons-873604" has status "Ready":"False"
	I0417 19:10:33.235662  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:33.369880  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:33.376814  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:33.747531  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:33.866287  694161 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0417 19:10:33.866365  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:33.881308  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:33.901922  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:33.911544  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:34.090387  694161 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0417 19:10:34.129054  694161 addons.go:234] Setting addon gcp-auth=true in "addons-873604"
	I0417 19:10:34.129106  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:34.129544  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:34.146111  694161 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0417 19:10:34.146166  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:34.184643  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:34.242748  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:34.370505  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:34.375460  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:34.741337  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:34.869880  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:34.875462  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:34.901369  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.139432601s)
	I0417 19:10:34.904202  694161 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0417 19:10:34.906842  694161 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0417 19:10:34.909067  694161 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0417 19:10:34.909094  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0417 19:10:34.943154  694161 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0417 19:10:34.943181  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0417 19:10:34.970994  694161 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0417 19:10:34.971028  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0417 19:10:34.994175  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0417 19:10:35.236043  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:35.369854  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:35.376202  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:35.669273  694161 node_ready.go:53] node "addons-873604" has status "Ready":"False"
	I0417 19:10:35.708575  694161 addons.go:470] Verifying addon gcp-auth=true in "addons-873604"
	I0417 19:10:35.710859  694161 out.go:177] * Verifying gcp-auth addon...
	I0417 19:10:35.714416  694161 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0417 19:10:35.726369  694161 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0417 19:10:35.726402  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:35.737043  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:35.870349  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:35.875860  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:36.219485  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:36.237764  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:36.370223  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:36.378510  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:36.725348  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:36.741277  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:36.870492  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:36.875817  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:37.222461  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:37.235817  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:37.369787  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:37.375672  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:37.717842  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:37.740759  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:37.869888  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:37.876592  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:38.138861  694161 node_ready.go:53] node "addons-873604" has status "Ready":"False"
	I0417 19:10:38.225001  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:38.238824  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:38.371766  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:38.377432  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:38.718765  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:38.744321  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:38.874671  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:38.880120  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:39.241812  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:39.257757  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:39.371224  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:39.376249  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:39.717996  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:39.738323  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:39.870734  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:39.875623  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:40.139051  694161 node_ready.go:53] node "addons-873604" has status "Ready":"False"
	I0417 19:10:40.218201  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:40.246904  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:40.369716  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:40.375729  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:40.717954  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:40.736144  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:40.869500  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:40.875173  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:41.218470  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:41.243127  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:41.370231  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:41.375902  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:41.718717  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:41.738493  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:41.869720  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:41.875470  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:42.144611  694161 node_ready.go:53] node "addons-873604" has status "Ready":"False"
	I0417 19:10:42.219098  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:42.244859  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:42.370993  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:42.376710  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:42.718338  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:42.741738  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:42.869755  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:42.875782  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:43.218545  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:43.235532  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:43.369652  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:43.375983  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:43.718024  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:43.735764  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:43.869481  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:43.875400  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:44.218908  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:44.235079  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:44.369604  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:44.375463  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:44.639044  694161 node_ready.go:53] node "addons-873604" has status "Ready":"False"
	I0417 19:10:44.717939  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:44.735659  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:44.869819  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:44.875686  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:45.219433  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:45.239557  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:45.371014  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:45.375903  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:45.718534  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:45.736041  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:45.869520  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:45.875728  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:46.217824  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:46.236030  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:46.369776  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:46.375147  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:46.639153  694161 node_ready.go:53] node "addons-873604" has status "Ready":"False"
	I0417 19:10:46.718101  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:46.736301  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:46.870011  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:46.874957  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:47.218218  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:47.236112  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:47.369860  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:47.375589  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:47.718185  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:47.735878  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:47.869210  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:47.875185  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:48.218659  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:48.235815  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:48.370102  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:48.375387  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:48.717887  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:48.740697  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:48.869112  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:48.874946  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:49.138984  694161 node_ready.go:53] node "addons-873604" has status "Ready":"False"
	I0417 19:10:49.218420  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:49.237041  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:49.369614  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:49.375446  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:49.718027  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:49.735523  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:49.869621  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:49.875871  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:50.218195  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:50.235394  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:50.368854  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:50.375640  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:50.718051  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:50.742698  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:50.869911  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:50.874802  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:51.141569  694161 node_ready.go:53] node "addons-873604" has status "Ready":"False"
	I0417 19:10:51.217814  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:51.235260  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:51.369930  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:51.376075  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:51.718403  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:51.735753  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:51.869875  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:51.875881  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:52.218587  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:52.235946  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:52.370301  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:52.374995  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:52.717822  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:52.735306  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:52.869458  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:52.875524  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:53.218284  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:53.234965  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:53.369560  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:53.375331  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:53.638660  694161 node_ready.go:53] node "addons-873604" has status "Ready":"False"
	I0417 19:10:53.718059  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:53.736916  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:53.869022  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:53.874600  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:54.218310  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:54.236412  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:54.369596  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:54.379677  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:54.718184  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:54.736434  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:54.869969  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:54.874908  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:55.218201  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:55.237492  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:55.371780  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:55.375377  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:55.639952  694161 node_ready.go:53] node "addons-873604" has status "Ready":"False"
	I0417 19:10:55.718402  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:55.735774  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:55.870137  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:55.876303  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:56.218478  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:56.236205  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:56.369275  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:56.375044  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:56.718322  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:56.736029  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:56.869783  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:56.875502  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:57.218460  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:57.235190  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:57.369537  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:57.375036  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:57.717728  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:57.741519  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:57.869492  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:57.875018  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:58.193982  694161 node_ready.go:49] node "addons-873604" has status "Ready":"True"
	I0417 19:10:58.194009  694161 node_ready.go:38] duration metric: took 29.558788318s for node "addons-873604" to be "Ready" ...
	I0417 19:10:58.194020  694161 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0417 19:10:58.219989  694161 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-tf89r" in "kube-system" namespace to be "Ready" ...
	I0417 19:10:58.230190  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:58.242523  694161 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0417 19:10:58.242551  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:58.453219  694161 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0417 19:10:58.453244  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:58.459654  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:58.720215  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:58.749642  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:58.905939  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:58.910002  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:59.218682  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:59.241589  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:59.371419  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:59.375812  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:59.719430  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:59.750563  694161 pod_ready.go:92] pod "coredns-7db6d8ff4d-tf89r" in "kube-system" namespace has status "Ready":"True"
	I0417 19:10:59.750638  694161 pod_ready.go:81] duration metric: took 1.530606485s for pod "coredns-7db6d8ff4d-tf89r" in "kube-system" namespace to be "Ready" ...
	I0417 19:10:59.750676  694161 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-873604" in "kube-system" namespace to be "Ready" ...
	I0417 19:10:59.760738  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:59.772853  694161 pod_ready.go:92] pod "etcd-addons-873604" in "kube-system" namespace has status "Ready":"True"
	I0417 19:10:59.772939  694161 pod_ready.go:81] duration metric: took 22.232651ms for pod "etcd-addons-873604" in "kube-system" namespace to be "Ready" ...
	I0417 19:10:59.772982  694161 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-873604" in "kube-system" namespace to be "Ready" ...
	I0417 19:10:59.781220  694161 pod_ready.go:92] pod "kube-apiserver-addons-873604" in "kube-system" namespace has status "Ready":"True"
	I0417 19:10:59.781286  694161 pod_ready.go:81] duration metric: took 8.281225ms for pod "kube-apiserver-addons-873604" in "kube-system" namespace to be "Ready" ...
	I0417 19:10:59.781314  694161 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-873604" in "kube-system" namespace to be "Ready" ...
	I0417 19:10:59.790065  694161 pod_ready.go:92] pod "kube-controller-manager-addons-873604" in "kube-system" namespace has status "Ready":"True"
	I0417 19:10:59.790136  694161 pod_ready.go:81] duration metric: took 8.801893ms for pod "kube-controller-manager-addons-873604" in "kube-system" namespace to be "Ready" ...
	I0417 19:10:59.790167  694161 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zcxl8" in "kube-system" namespace to be "Ready" ...
	I0417 19:10:59.798101  694161 pod_ready.go:92] pod "kube-proxy-zcxl8" in "kube-system" namespace has status "Ready":"True"
	I0417 19:10:59.798168  694161 pod_ready.go:81] duration metric: took 7.981361ms for pod "kube-proxy-zcxl8" in "kube-system" namespace to be "Ready" ...
	I0417 19:10:59.798195  694161 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-873604" in "kube-system" namespace to be "Ready" ...
	I0417 19:10:59.870139  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:59.878728  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:00.142401  694161 pod_ready.go:92] pod "kube-scheduler-addons-873604" in "kube-system" namespace has status "Ready":"True"
	I0417 19:11:00.142491  694161 pod_ready.go:81] duration metric: took 344.272717ms for pod "kube-scheduler-addons-873604" in "kube-system" namespace to be "Ready" ...
	I0417 19:11:00.142521  694161 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace to be "Ready" ...
	I0417 19:11:00.227796  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:00.238580  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:00.372007  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:00.380066  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:00.722375  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:00.753135  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:00.869386  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:00.876005  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:01.218295  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:01.237169  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:01.370068  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:01.375354  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:01.718041  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:01.737786  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:01.891834  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:01.903192  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:02.148836  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:02.218827  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:02.236968  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:02.371716  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:02.378179  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:02.719972  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:02.736836  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:02.869447  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:02.875945  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:03.218552  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:03.237415  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:03.371858  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:03.377871  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:03.722136  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:03.737853  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:03.872749  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:03.880712  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:04.150468  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:04.218259  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:04.238582  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:04.370947  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:04.376649  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:04.718729  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:04.737561  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:04.869349  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:04.876110  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:05.220261  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:05.236778  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:05.378553  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:05.378768  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:05.718695  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:05.740079  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:05.870137  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:05.875792  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:06.150773  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:06.218462  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:06.238256  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:06.369807  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:06.377426  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:06.718603  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:06.741723  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:06.871607  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:06.879159  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:07.222323  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:07.240218  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:07.376998  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:07.380954  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:07.718172  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:07.738377  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:07.870599  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:07.875837  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:08.218036  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:08.236735  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:08.382356  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:08.389373  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:08.650078  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:08.718935  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:08.739064  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:08.871648  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:08.910720  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:09.218824  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:09.237055  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:09.371332  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:09.377805  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:09.718698  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:09.742366  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:09.869953  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:09.877683  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:10.224551  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:10.249433  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:10.372135  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:10.380415  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:10.719590  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:10.742216  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:10.870793  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:10.877309  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:11.149651  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:11.219175  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:11.238836  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:11.371144  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:11.376428  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:11.720002  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:11.740368  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:11.872181  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:11.888635  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:12.219814  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:12.241282  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:12.370200  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:12.376407  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:12.718721  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:12.740118  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:12.871105  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:12.876231  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:13.218848  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:13.236893  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:13.369915  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:13.376112  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:13.653763  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:13.718924  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:13.737517  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:13.886278  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:13.895271  694161 kapi.go:107] duration metric: took 42.524139256s to wait for kubernetes.io/minikube-addons=registry ...
	I0417 19:11:14.223223  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:14.238887  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:14.371817  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:14.718137  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:14.740228  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:14.870261  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:15.219817  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:15.238023  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:15.371453  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:15.720488  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:15.739108  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:15.872775  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:16.152651  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:16.235876  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:16.254732  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:16.374869  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:16.728724  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:16.771056  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:16.888550  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:17.226920  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:17.239956  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:17.370111  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:17.718273  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:17.748220  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:17.872756  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:18.153206  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:18.220492  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:18.239230  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:18.372003  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:18.721435  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:18.760309  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:18.871903  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:19.218808  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:19.238035  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:19.371667  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:19.718281  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:19.743463  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:19.870338  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:20.219031  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:20.237181  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:20.370688  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:20.654549  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:20.719437  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:20.751637  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:20.870879  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:21.220109  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:21.239047  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:21.370636  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:21.718946  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:21.741844  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:21.870909  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:22.242466  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:22.249024  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:22.370372  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:22.722411  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:22.753462  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:22.869380  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:23.160041  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:23.218705  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:23.237200  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:23.369904  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:23.718518  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:23.737766  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:23.869369  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:24.220204  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:24.238505  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:24.373666  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:24.719587  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:24.737987  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:24.869967  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:25.219125  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:25.239295  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:25.370188  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:25.649597  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:25.717825  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:25.740871  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:25.870722  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:26.218908  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:26.237074  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:26.369559  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:26.718015  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:26.742363  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:26.869507  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:27.219220  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:27.236965  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:27.371839  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:27.653655  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:27.718434  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:27.741889  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:27.869166  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:28.218914  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:28.238449  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:28.370624  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:28.718931  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:28.745678  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:28.870336  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:29.218350  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:29.237827  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:29.369637  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:29.721178  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:29.738533  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:29.871052  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:30.154212  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:30.219280  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:30.236493  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:30.374199  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:30.718520  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:30.744719  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:30.870260  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:31.219234  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:31.236794  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:31.371048  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:31.718349  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:31.738516  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:31.870446  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:32.220114  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:32.239046  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:32.371459  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:32.649375  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:32.720129  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:32.749330  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:32.869470  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:33.218994  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:33.236895  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:33.376533  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:33.721459  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:33.750444  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:33.870800  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:34.218241  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:34.236673  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:34.371325  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:34.649682  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:34.719428  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:34.741178  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:34.869184  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:35.218402  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:35.236244  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:35.369472  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:35.719798  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:35.750522  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:35.870451  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:36.218924  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:36.237504  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:36.373065  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:36.718683  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:36.742382  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:36.870620  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:37.150017  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:37.218810  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:37.236196  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:37.369180  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:37.718066  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:37.741745  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:37.869300  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:38.218084  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:38.237553  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:38.370416  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:38.718226  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:38.737085  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:38.870884  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:39.151042  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:39.218633  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:39.236507  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:39.371347  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:39.718326  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:39.738235  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:39.869307  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:40.218803  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:40.237001  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:40.369375  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:40.718725  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:40.742341  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:40.869892  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:41.218119  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:41.240500  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:41.373725  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:41.650417  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:41.717699  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:41.737890  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:41.869495  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:42.226067  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:42.237543  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:42.374556  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:42.718976  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:42.745030  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:42.871186  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:43.219028  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:43.238084  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:43.370843  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:43.651813  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:43.718101  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:43.751490  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:43.871234  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:44.218810  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:44.236551  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:44.371141  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:44.718270  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:44.737739  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:44.870636  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:45.223033  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:45.246938  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:45.386761  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:45.718293  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:45.751686  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:45.870583  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:46.148729  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:46.218333  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:46.236675  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:46.370135  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:46.726332  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:46.743102  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:46.874631  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:47.219761  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:47.236920  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:47.371118  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:47.718738  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:47.752629  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:47.870320  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:48.149702  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:48.218199  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:48.237288  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:48.371836  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:48.724142  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:48.746300  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:48.869645  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:49.218612  694161 kapi.go:107] duration metric: took 1m13.504195307s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0417 19:11:49.220844  694161 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-873604 cluster.
	I0417 19:11:49.222830  694161 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0417 19:11:49.225223  694161 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0417 19:11:49.247040  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:49.370141  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:49.745956  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:49.869404  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:50.152375  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:50.246453  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:50.370212  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:50.744501  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:50.871108  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:51.237122  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:51.370542  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:51.739926  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:51.870413  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:52.237932  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:52.372597  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:52.650827  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:52.752841  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:52.869858  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:53.238097  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:53.369493  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:53.738440  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:53.869877  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:54.238327  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:54.378629  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:54.654021  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:54.742588  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:54.871247  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:55.236375  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:55.370578  694161 kapi.go:107] duration metric: took 1m24.009088193s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0417 19:11:55.756441  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:56.237089  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:56.738219  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:57.152018  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:57.236811  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:57.747232  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:58.237468  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:58.744977  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:59.236482  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:59.648673  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:59.738136  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:12:00.317239  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:12:00.745916  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:12:01.239124  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:12:01.649699  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:12:01.746095  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:12:02.236889  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:12:02.741611  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:12:03.239120  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:12:03.651329  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:12:03.745174  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:12:04.237053  694161 kapi.go:107] duration metric: took 1m32.50621535s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0417 19:12:04.240443  694161 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0417 19:12:04.242331  694161 addons.go:505] duration metric: took 1m38.971457593s for enable addons: enabled=[cloud-spanner ingress-dns storage-provisioner nvidia-device-plugin metrics-server yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0417 19:12:06.149840  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:12:08.648808  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:12:10.649204  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:12:13.148836  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:12:15.149714  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:12:17.150184  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:12:19.648638  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:12:21.649539  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:12:24.149824  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:12:26.155030  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:12:28.149378  694161 pod_ready.go:92] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"True"
	I0417 19:12:28.149404  694161 pod_ready.go:81] duration metric: took 1m28.006859378s for pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace to be "Ready" ...
	I0417 19:12:28.149417  694161 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6lc6l" in "kube-system" namespace to be "Ready" ...
	I0417 19:12:28.154641  694161 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-6lc6l" in "kube-system" namespace has status "Ready":"True"
	I0417 19:12:28.154665  694161 pod_ready.go:81] duration metric: took 5.240053ms for pod "nvidia-device-plugin-daemonset-6lc6l" in "kube-system" namespace to be "Ready" ...
	I0417 19:12:28.154717  694161 pod_ready.go:38] duration metric: took 1m29.960629996s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0417 19:12:28.154738  694161 api_server.go:52] waiting for apiserver process to appear ...
	I0417 19:12:28.154789  694161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0417 19:12:28.154864  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0417 19:12:28.215121  694161 cri.go:89] found id: "e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211"
	I0417 19:12:28.215144  694161 cri.go:89] found id: ""
	I0417 19:12:28.215160  694161 logs.go:276] 1 containers: [e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211]
	I0417 19:12:28.215222  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:28.219199  694161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0417 19:12:28.219271  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0417 19:12:28.257376  694161 cri.go:89] found id: "601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1"
	I0417 19:12:28.257396  694161 cri.go:89] found id: ""
	I0417 19:12:28.257404  694161 logs.go:276] 1 containers: [601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1]
	I0417 19:12:28.257462  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:28.260955  694161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0417 19:12:28.261031  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0417 19:12:28.306001  694161 cri.go:89] found id: "07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311"
	I0417 19:12:28.306027  694161 cri.go:89] found id: ""
	I0417 19:12:28.306035  694161 logs.go:276] 1 containers: [07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311]
	I0417 19:12:28.306115  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:28.309905  694161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0417 19:12:28.310025  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0417 19:12:28.351814  694161 cri.go:89] found id: "e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76"
	I0417 19:12:28.351842  694161 cri.go:89] found id: ""
	I0417 19:12:28.351850  694161 logs.go:276] 1 containers: [e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76]
	I0417 19:12:28.351914  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:28.355525  694161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0417 19:12:28.355608  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0417 19:12:28.414372  694161 cri.go:89] found id: "86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17"
	I0417 19:12:28.414393  694161 cri.go:89] found id: ""
	I0417 19:12:28.414402  694161 logs.go:276] 1 containers: [86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17]
	I0417 19:12:28.414459  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:28.418031  694161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0417 19:12:28.418104  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0417 19:12:28.456737  694161 cri.go:89] found id: "97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458"
	I0417 19:12:28.456815  694161 cri.go:89] found id: ""
	I0417 19:12:28.456836  694161 logs.go:276] 1 containers: [97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458]
	I0417 19:12:28.456938  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:28.460493  694161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0417 19:12:28.460561  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0417 19:12:28.505567  694161 cri.go:89] found id: "fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6"
	I0417 19:12:28.505598  694161 cri.go:89] found id: ""
	I0417 19:12:28.505606  694161 logs.go:276] 1 containers: [fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6]
	I0417 19:12:28.505663  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:28.509220  694161 logs.go:123] Gathering logs for etcd [601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1] ...
	I0417 19:12:28.509245  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1"
	I0417 19:12:28.562732  694161 logs.go:123] Gathering logs for coredns [07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311] ...
	I0417 19:12:28.562804  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311"
	I0417 19:12:28.606981  694161 logs.go:123] Gathering logs for kube-scheduler [e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76] ...
	I0417 19:12:28.607017  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76"
	I0417 19:12:28.656922  694161 logs.go:123] Gathering logs for container status ...
	I0417 19:12:28.656955  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0417 19:12:28.706497  694161 logs.go:123] Gathering logs for kubelet ...
	I0417 19:12:28.706528  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0417 19:12:28.765045  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.985411    1495 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-873604' and this object
	W0417 19:12:28.765268  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.985465    1495 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-873604' and this object
	W0417 19:12:28.765449  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.985517    1495 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:28.765649  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.985530    1495 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:28.765813  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.985589    1495 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-873604" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:28.765994  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.985601    1495 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-873604" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:28.766177  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.988363    1495 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:28.766382  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.988460    1495 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:28.766565  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.988523    1495 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:28.766772  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.988539    1495 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	I0417 19:12:28.804099  694161 logs.go:123] Gathering logs for describe nodes ...
	I0417 19:12:28.805095  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0417 19:12:28.981173  694161 logs.go:123] Gathering logs for kube-apiserver [e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211] ...
	I0417 19:12:28.981206  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211"
	I0417 19:12:29.069695  694161 logs.go:123] Gathering logs for kube-proxy [86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17] ...
	I0417 19:12:29.069737  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17"
	I0417 19:12:29.108940  694161 logs.go:123] Gathering logs for kube-controller-manager [97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458] ...
	I0417 19:12:29.108969  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458"
	I0417 19:12:29.194463  694161 logs.go:123] Gathering logs for kindnet [fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6] ...
	I0417 19:12:29.194498  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6"
	I0417 19:12:29.233703  694161 logs.go:123] Gathering logs for CRI-O ...
	I0417 19:12:29.233793  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0417 19:12:29.331263  694161 logs.go:123] Gathering logs for dmesg ...
	I0417 19:12:29.331303  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0417 19:12:29.350417  694161 out.go:304] Setting ErrFile to fd 2...
	I0417 19:12:29.350446  694161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0417 19:12:29.350495  694161 out.go:239] X Problems detected in kubelet:
	W0417 19:12:29.350510  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.985601    1495 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-873604" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:29.350522  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.988363    1495 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:29.350534  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.988460    1495 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:29.350542  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.988523    1495 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:29.350552  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.988539    1495 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	I0417 19:12:29.350559  694161 out.go:304] Setting ErrFile to fd 2...
	I0417 19:12:29.350570  694161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:12:39.351331  694161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 19:12:39.368434  694161 api_server.go:72] duration metric: took 2m14.097783195s to wait for apiserver process to appear ...
	I0417 19:12:39.368458  694161 api_server.go:88] waiting for apiserver healthz status ...
	I0417 19:12:39.368492  694161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0417 19:12:39.368560  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0417 19:12:39.409853  694161 cri.go:89] found id: "e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211"
	I0417 19:12:39.409873  694161 cri.go:89] found id: ""
	I0417 19:12:39.409881  694161 logs.go:276] 1 containers: [e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211]
	I0417 19:12:39.409937  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:39.413569  694161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0417 19:12:39.413643  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0417 19:12:39.452693  694161 cri.go:89] found id: "601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1"
	I0417 19:12:39.452717  694161 cri.go:89] found id: ""
	I0417 19:12:39.452725  694161 logs.go:276] 1 containers: [601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1]
	I0417 19:12:39.452779  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:39.456270  694161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0417 19:12:39.456343  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0417 19:12:39.499495  694161 cri.go:89] found id: "07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311"
	I0417 19:12:39.499516  694161 cri.go:89] found id: ""
	I0417 19:12:39.499524  694161 logs.go:276] 1 containers: [07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311]
	I0417 19:12:39.499579  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:39.504195  694161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0417 19:12:39.504264  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0417 19:12:39.545852  694161 cri.go:89] found id: "e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76"
	I0417 19:12:39.545875  694161 cri.go:89] found id: ""
	I0417 19:12:39.545883  694161 logs.go:276] 1 containers: [e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76]
	I0417 19:12:39.545943  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:39.549688  694161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0417 19:12:39.549763  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0417 19:12:39.591672  694161 cri.go:89] found id: "86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17"
	I0417 19:12:39.591695  694161 cri.go:89] found id: ""
	I0417 19:12:39.591703  694161 logs.go:276] 1 containers: [86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17]
	I0417 19:12:39.591760  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:39.595500  694161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0417 19:12:39.595585  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0417 19:12:39.633385  694161 cri.go:89] found id: "97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458"
	I0417 19:12:39.633407  694161 cri.go:89] found id: ""
	I0417 19:12:39.633415  694161 logs.go:276] 1 containers: [97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458]
	I0417 19:12:39.633471  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:39.637028  694161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0417 19:12:39.637104  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0417 19:12:39.676483  694161 cri.go:89] found id: "fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6"
	I0417 19:12:39.676565  694161 cri.go:89] found id: ""
	I0417 19:12:39.676581  694161 logs.go:276] 1 containers: [fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6]
	I0417 19:12:39.676640  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:39.680313  694161 logs.go:123] Gathering logs for kubelet ...
	I0417 19:12:39.680340  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0417 19:12:39.735678  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.985411    1495 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-873604' and this object
	W0417 19:12:39.735909  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.985465    1495 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-873604' and this object
	W0417 19:12:39.736092  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.985517    1495 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:39.736291  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.985530    1495 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:39.736476  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.985589    1495 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-873604" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:39.736681  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.985601    1495 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-873604" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:39.736876  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.988363    1495 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:39.737081  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.988460    1495 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:39.737266  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.988523    1495 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:39.737472  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.988539    1495 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	I0417 19:12:39.775926  694161 logs.go:123] Gathering logs for describe nodes ...
	I0417 19:12:39.775958  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0417 19:12:39.911859  694161 logs.go:123] Gathering logs for coredns [07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311] ...
	I0417 19:12:39.911891  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311"
	I0417 19:12:39.956547  694161 logs.go:123] Gathering logs for kube-controller-manager [97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458] ...
	I0417 19:12:39.956577  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458"
	I0417 19:12:40.025096  694161 logs.go:123] Gathering logs for container status ...
	I0417 19:12:40.025144  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0417 19:12:40.105644  694161 logs.go:123] Gathering logs for CRI-O ...
	I0417 19:12:40.105682  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0417 19:12:40.210702  694161 logs.go:123] Gathering logs for dmesg ...
	I0417 19:12:40.210745  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0417 19:12:40.230875  694161 logs.go:123] Gathering logs for kube-apiserver [e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211] ...
	I0417 19:12:40.230912  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211"
	I0417 19:12:40.301628  694161 logs.go:123] Gathering logs for etcd [601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1] ...
	I0417 19:12:40.301660  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1"
	I0417 19:12:40.361420  694161 logs.go:123] Gathering logs for kube-scheduler [e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76] ...
	I0417 19:12:40.361462  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76"
	I0417 19:12:40.411255  694161 logs.go:123] Gathering logs for kube-proxy [86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17] ...
	I0417 19:12:40.411289  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17"
	I0417 19:12:40.451586  694161 logs.go:123] Gathering logs for kindnet [fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6] ...
	I0417 19:12:40.451617  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6"
	I0417 19:12:40.494642  694161 out.go:304] Setting ErrFile to fd 2...
	I0417 19:12:40.494676  694161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0417 19:12:40.494782  694161 out.go:239] X Problems detected in kubelet:
	W0417 19:12:40.494827  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.985601    1495 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-873604" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:40.494855  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.988363    1495 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:40.494863  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.988460    1495 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:40.494870  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.988523    1495 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:40.494881  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.988539    1495 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	I0417 19:12:40.494887  694161 out.go:304] Setting ErrFile to fd 2...
	I0417 19:12:40.494893  694161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:12:50.496684  694161 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0417 19:12:50.504299  694161 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0417 19:12:50.505439  694161 api_server.go:141] control plane version: v1.30.0-rc.2
	I0417 19:12:50.505468  694161 api_server.go:131] duration metric: took 11.136999618s to wait for apiserver health ...
	I0417 19:12:50.505478  694161 system_pods.go:43] waiting for kube-system pods to appear ...
	I0417 19:12:50.505499  694161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0417 19:12:50.505560  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0417 19:12:50.551940  694161 cri.go:89] found id: "e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211"
	I0417 19:12:50.551964  694161 cri.go:89] found id: ""
	I0417 19:12:50.551972  694161 logs.go:276] 1 containers: [e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211]
	I0417 19:12:50.552042  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:50.555833  694161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0417 19:12:50.555936  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0417 19:12:50.598297  694161 cri.go:89] found id: "601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1"
	I0417 19:12:50.598320  694161 cri.go:89] found id: ""
	I0417 19:12:50.598328  694161 logs.go:276] 1 containers: [601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1]
	I0417 19:12:50.598391  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:50.602101  694161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0417 19:12:50.602175  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0417 19:12:50.642793  694161 cri.go:89] found id: "07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311"
	I0417 19:12:50.642813  694161 cri.go:89] found id: ""
	I0417 19:12:50.642821  694161 logs.go:276] 1 containers: [07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311]
	I0417 19:12:50.642875  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:50.646409  694161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0417 19:12:50.646502  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0417 19:12:50.688600  694161 cri.go:89] found id: "e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76"
	I0417 19:12:50.688621  694161 cri.go:89] found id: ""
	I0417 19:12:50.688629  694161 logs.go:276] 1 containers: [e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76]
	I0417 19:12:50.688704  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:50.692293  694161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0417 19:12:50.692374  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0417 19:12:50.734228  694161 cri.go:89] found id: "86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17"
	I0417 19:12:50.734253  694161 cri.go:89] found id: ""
	I0417 19:12:50.734261  694161 logs.go:276] 1 containers: [86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17]
	I0417 19:12:50.734351  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:50.743726  694161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0417 19:12:50.743815  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0417 19:12:50.786457  694161 cri.go:89] found id: "97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458"
	I0417 19:12:50.786480  694161 cri.go:89] found id: ""
	I0417 19:12:50.786487  694161 logs.go:276] 1 containers: [97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458]
	I0417 19:12:50.786572  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:50.790301  694161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0417 19:12:50.790394  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0417 19:12:50.840094  694161 cri.go:89] found id: "fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6"
	I0417 19:12:50.840169  694161 cri.go:89] found id: ""
	I0417 19:12:50.840192  694161 logs.go:276] 1 containers: [fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6]
	I0417 19:12:50.840272  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:50.844437  694161 logs.go:123] Gathering logs for container status ...
	I0417 19:12:50.844508  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0417 19:12:50.907645  694161 logs.go:123] Gathering logs for etcd [601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1] ...
	I0417 19:12:50.907675  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1"
	I0417 19:12:50.958811  694161 logs.go:123] Gathering logs for coredns [07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311] ...
	I0417 19:12:50.958845  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311"
	I0417 19:12:51.014483  694161 logs.go:123] Gathering logs for kube-scheduler [e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76] ...
	I0417 19:12:51.014523  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76"
	I0417 19:12:51.069807  694161 logs.go:123] Gathering logs for kube-proxy [86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17] ...
	I0417 19:12:51.069839  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17"
	I0417 19:12:51.120036  694161 logs.go:123] Gathering logs for kube-controller-manager [97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458] ...
	I0417 19:12:51.120068  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458"
	I0417 19:12:51.196076  694161 logs.go:123] Gathering logs for kindnet [fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6] ...
	I0417 19:12:51.196112  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6"
	I0417 19:12:51.239098  694161 logs.go:123] Gathering logs for kubelet ...
	I0417 19:12:51.239130  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0417 19:12:51.296063  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.985411    1495 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.296320  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.985465    1495 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.296511  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.985517    1495 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.296716  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.985530    1495 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.296888  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.985589    1495 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-873604" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.297072  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.985601    1495 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-873604" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.297261  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.988363    1495 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.297468  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.988460    1495 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.297656  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.988523    1495 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.297865  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.988539    1495 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	I0417 19:12:51.337800  694161 logs.go:123] Gathering logs for dmesg ...
	I0417 19:12:51.337832  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0417 19:12:51.356602  694161 logs.go:123] Gathering logs for describe nodes ...
	I0417 19:12:51.356634  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0417 19:12:51.495718  694161 logs.go:123] Gathering logs for kube-apiserver [e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211] ...
	I0417 19:12:51.495749  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211"
	I0417 19:12:51.563465  694161 logs.go:123] Gathering logs for CRI-O ...
	I0417 19:12:51.563540  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0417 19:12:51.657423  694161 out.go:304] Setting ErrFile to fd 2...
	I0417 19:12:51.657454  694161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0417 19:12:51.657514  694161 out.go:239] X Problems detected in kubelet:
	W0417 19:12:51.657528  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.985601    1495 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-873604" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.657536  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.988363    1495 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.657547  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.988460    1495 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.657555  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.988523    1495 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.657565  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.988539    1495 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	I0417 19:12:51.657572  694161 out.go:304] Setting ErrFile to fd 2...
	I0417 19:12:51.657583  694161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:13:01.671954  694161 system_pods.go:59] 18 kube-system pods found
	I0417 19:13:01.672013  694161 system_pods.go:61] "coredns-7db6d8ff4d-tf89r" [6d50a17c-d030-491f-a9e7-344e52ca2e43] Running
	I0417 19:13:01.672028  694161 system_pods.go:61] "csi-hostpath-attacher-0" [5ed8350d-6f82-4a99-81fb-acce4f44903e] Running
	I0417 19:13:01.672032  694161 system_pods.go:61] "csi-hostpath-resizer-0" [a094ae73-5cba-4d9f-8f80-f92b7b371c55] Running
	I0417 19:13:01.672037  694161 system_pods.go:61] "csi-hostpathplugin-28wcl" [9513b917-9c97-4e7d-a58c-68fcdb52eadc] Running
	I0417 19:13:01.672041  694161 system_pods.go:61] "etcd-addons-873604" [4d8bd5d0-ff8e-46c2-95b2-370af1fdf8ee] Running
	I0417 19:13:01.672052  694161 system_pods.go:61] "kindnet-xrsgr" [c915c17a-d1ae-404f-a25a-93e517bf7ff9] Running
	I0417 19:13:01.672057  694161 system_pods.go:61] "kube-apiserver-addons-873604" [5d12b02b-b639-4306-b01e-621e0adff821] Running
	I0417 19:13:01.672073  694161 system_pods.go:61] "kube-controller-manager-addons-873604" [8c8ebc95-425b-4347-81b1-9a01e1e106e7] Running
	I0417 19:13:01.672082  694161 system_pods.go:61] "kube-ingress-dns-minikube" [b4ebfb39-7e93-4561-9442-16bc8af64c70] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0417 19:13:01.672091  694161 system_pods.go:61] "kube-proxy-zcxl8" [ee652fb9-719b-460e-becc-f9c35909409c] Running
	I0417 19:13:01.672097  694161 system_pods.go:61] "kube-scheduler-addons-873604" [1493bd44-030a-408f-b7d8-6f60ae22987d] Running
	I0417 19:13:01.672102  694161 system_pods.go:61] "metrics-server-c59844bb4-q7zp5" [da8a5501-6baf-4977-905c-f81fe98110e2] Running
	I0417 19:13:01.672113  694161 system_pods.go:61] "nvidia-device-plugin-daemonset-6lc6l" [88c0cead-b0d0-4699-b183-dab722233906] Running
	I0417 19:13:01.672117  694161 system_pods.go:61] "registry-hlj26" [bd1989fe-0b5a-41a4-ae03-88af2d34eb0d] Running
	I0417 19:13:01.672121  694161 system_pods.go:61] "registry-proxy-qwqgq" [e06390fb-d1dc-4627-80a9-02edada26c01] Running
	I0417 19:13:01.672125  694161 system_pods.go:61] "snapshot-controller-745499f584-4wzw2" [4eb7bf86-05b3-4e06-83e6-05e94dd20f58] Running
	I0417 19:13:01.672129  694161 system_pods.go:61] "snapshot-controller-745499f584-j78nn" [84496916-d4e7-4a9b-b8e3-dce36db8163d] Running
	I0417 19:13:01.672136  694161 system_pods.go:61] "storage-provisioner" [71a1577e-c751-48be-b51e-ae0981fefa0b] Running
	I0417 19:13:01.672142  694161 system_pods.go:74] duration metric: took 11.166658539s to wait for pod list to return data ...
	I0417 19:13:01.672154  694161 default_sa.go:34] waiting for default service account to be created ...
	I0417 19:13:01.674721  694161 default_sa.go:45] found service account: "default"
	I0417 19:13:01.674749  694161 default_sa.go:55] duration metric: took 2.588232ms for default service account to be created ...
	I0417 19:13:01.674760  694161 system_pods.go:116] waiting for k8s-apps to be running ...
	I0417 19:13:01.684828  694161 system_pods.go:86] 18 kube-system pods found
	I0417 19:13:01.684864  694161 system_pods.go:89] "coredns-7db6d8ff4d-tf89r" [6d50a17c-d030-491f-a9e7-344e52ca2e43] Running
	I0417 19:13:01.684871  694161 system_pods.go:89] "csi-hostpath-attacher-0" [5ed8350d-6f82-4a99-81fb-acce4f44903e] Running
	I0417 19:13:01.684876  694161 system_pods.go:89] "csi-hostpath-resizer-0" [a094ae73-5cba-4d9f-8f80-f92b7b371c55] Running
	I0417 19:13:01.684881  694161 system_pods.go:89] "csi-hostpathplugin-28wcl" [9513b917-9c97-4e7d-a58c-68fcdb52eadc] Running
	I0417 19:13:01.684886  694161 system_pods.go:89] "etcd-addons-873604" [4d8bd5d0-ff8e-46c2-95b2-370af1fdf8ee] Running
	I0417 19:13:01.684891  694161 system_pods.go:89] "kindnet-xrsgr" [c915c17a-d1ae-404f-a25a-93e517bf7ff9] Running
	I0417 19:13:01.684896  694161 system_pods.go:89] "kube-apiserver-addons-873604" [5d12b02b-b639-4306-b01e-621e0adff821] Running
	I0417 19:13:01.684901  694161 system_pods.go:89] "kube-controller-manager-addons-873604" [8c8ebc95-425b-4347-81b1-9a01e1e106e7] Running
	I0417 19:13:01.684909  694161 system_pods.go:89] "kube-ingress-dns-minikube" [b4ebfb39-7e93-4561-9442-16bc8af64c70] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0417 19:13:01.684921  694161 system_pods.go:89] "kube-proxy-zcxl8" [ee652fb9-719b-460e-becc-f9c35909409c] Running
	I0417 19:13:01.684930  694161 system_pods.go:89] "kube-scheduler-addons-873604" [1493bd44-030a-408f-b7d8-6f60ae22987d] Running
	I0417 19:13:01.684934  694161 system_pods.go:89] "metrics-server-c59844bb4-q7zp5" [da8a5501-6baf-4977-905c-f81fe98110e2] Running
	I0417 19:13:01.684939  694161 system_pods.go:89] "nvidia-device-plugin-daemonset-6lc6l" [88c0cead-b0d0-4699-b183-dab722233906] Running
	I0417 19:13:01.684947  694161 system_pods.go:89] "registry-hlj26" [bd1989fe-0b5a-41a4-ae03-88af2d34eb0d] Running
	I0417 19:13:01.684951  694161 system_pods.go:89] "registry-proxy-qwqgq" [e06390fb-d1dc-4627-80a9-02edada26c01] Running
	I0417 19:13:01.684954  694161 system_pods.go:89] "snapshot-controller-745499f584-4wzw2" [4eb7bf86-05b3-4e06-83e6-05e94dd20f58] Running
	I0417 19:13:01.684959  694161 system_pods.go:89] "snapshot-controller-745499f584-j78nn" [84496916-d4e7-4a9b-b8e3-dce36db8163d] Running
	I0417 19:13:01.684965  694161 system_pods.go:89] "storage-provisioner" [71a1577e-c751-48be-b51e-ae0981fefa0b] Running
	I0417 19:13:01.684974  694161 system_pods.go:126] duration metric: took 10.207794ms to wait for k8s-apps to be running ...
	I0417 19:13:01.684985  694161 system_svc.go:44] waiting for kubelet service to be running ....
	I0417 19:13:01.685047  694161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 19:13:01.697447  694161 system_svc.go:56] duration metric: took 12.451388ms WaitForService to wait for kubelet
	I0417 19:13:01.697477  694161 kubeadm.go:576] duration metric: took 2m36.426831573s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0417 19:13:01.697496  694161 node_conditions.go:102] verifying NodePressure condition ...
	I0417 19:13:01.700854  694161 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0417 19:13:01.700889  694161 node_conditions.go:123] node cpu capacity is 2
	I0417 19:13:01.700902  694161 node_conditions.go:105] duration metric: took 3.399502ms to run NodePressure ...
	I0417 19:13:01.700915  694161 start.go:240] waiting for startup goroutines ...
	I0417 19:13:01.700923  694161 start.go:245] waiting for cluster config update ...
	I0417 19:13:01.700940  694161 start.go:254] writing updated cluster config ...
	I0417 19:13:01.701278  694161 ssh_runner.go:195] Run: rm -f paused
	I0417 19:13:01.923739  694161 start.go:600] kubectl: 1.29.4, cluster: 1.30.0-rc.2 (minor skew: 1)
	I0417 19:13:01.926029  694161 out.go:177] * Done! kubectl is now configured to use "addons-873604" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 17 19:17:11 addons-873604 crio[912]: time="2024-04-17 19:17:11.436266784Z" level=info msg="Removing pod sandbox: feae8b0fbbea794d2911c0ed296f13d3e4e83f4ef1795bd7827c6f828185733a" id=38e6f98d-9d93-4514-a235-2e11c8f50862 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Apr 17 19:17:11 addons-873604 crio[912]: time="2024-04-17 19:17:11.445784947Z" level=info msg="Removed pod sandbox: feae8b0fbbea794d2911c0ed296f13d3e4e83f4ef1795bd7827c6f828185733a" id=38e6f98d-9d93-4514-a235-2e11c8f50862 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Apr 17 19:17:11 addons-873604 crio[912]: time="2024-04-17 19:17:11.446270055Z" level=info msg="Stopping pod sandbox: 4cda26947eb1061464b5a93d48a030908065e033b38d7e057fdd1e5d1b3610a8" id=66c56edd-704f-45bb-bace-56e8c7cfd188 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 17 19:17:11 addons-873604 crio[912]: time="2024-04-17 19:17:11.446305319Z" level=info msg="Stopped pod sandbox (already stopped): 4cda26947eb1061464b5a93d48a030908065e033b38d7e057fdd1e5d1b3610a8" id=66c56edd-704f-45bb-bace-56e8c7cfd188 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 17 19:17:11 addons-873604 crio[912]: time="2024-04-17 19:17:11.446693174Z" level=info msg="Removing pod sandbox: 4cda26947eb1061464b5a93d48a030908065e033b38d7e057fdd1e5d1b3610a8" id=bf05d4e0-ed85-49c8-8b50-5e18008c56ac name=/runtime.v1.RuntimeService/RemovePodSandbox
	Apr 17 19:17:11 addons-873604 crio[912]: time="2024-04-17 19:17:11.455960234Z" level=info msg="Removed pod sandbox: 4cda26947eb1061464b5a93d48a030908065e033b38d7e057fdd1e5d1b3610a8" id=bf05d4e0-ed85-49c8-8b50-5e18008c56ac name=/runtime.v1.RuntimeService/RemovePodSandbox
	Apr 17 19:17:11 addons-873604 crio[912]: time="2024-04-17 19:17:11.456565191Z" level=info msg="Stopping pod sandbox: 544704fa63aaccdb7d6b7dc36546e2e11cf03d0fece2bf4b7596c14ffa87abb3" id=08a4a86d-ba9a-4d9d-971e-e9dfe2e72c34 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 17 19:17:11 addons-873604 crio[912]: time="2024-04-17 19:17:11.456610474Z" level=info msg="Stopped pod sandbox (already stopped): 544704fa63aaccdb7d6b7dc36546e2e11cf03d0fece2bf4b7596c14ffa87abb3" id=08a4a86d-ba9a-4d9d-971e-e9dfe2e72c34 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 17 19:17:11 addons-873604 crio[912]: time="2024-04-17 19:17:11.456925609Z" level=info msg="Removing pod sandbox: 544704fa63aaccdb7d6b7dc36546e2e11cf03d0fece2bf4b7596c14ffa87abb3" id=dec208a5-4cec-4a5c-b414-bbbc9c774433 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Apr 17 19:17:11 addons-873604 crio[912]: time="2024-04-17 19:17:11.466222682Z" level=info msg="Removed pod sandbox: 544704fa63aaccdb7d6b7dc36546e2e11cf03d0fece2bf4b7596c14ffa87abb3" id=dec208a5-4cec-4a5c-b414-bbbc9c774433 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Apr 17 19:17:11 addons-873604 crio[912]: time="2024-04-17 19:17:11.532888122Z" level=info msg="Stopping container: 0814cf0bcc31b96bf5a15e8902e98280d3fb28faaa650944b3fa9d7afeb52ec6 (timeout: 2s)" id=2a41d494-0684-44ba-b92d-377660ca0897 name=/runtime.v1.RuntimeService/StopContainer
	Apr 17 19:17:13 addons-873604 crio[912]: time="2024-04-17 19:17:13.539325733Z" level=warning msg="Stopping container 0814cf0bcc31b96bf5a15e8902e98280d3fb28faaa650944b3fa9d7afeb52ec6 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=2a41d494-0684-44ba-b92d-377660ca0897 name=/runtime.v1.RuntimeService/StopContainer
	Apr 17 19:17:13 addons-873604 conmon[4804]: conmon 0814cf0bcc31b96bf5a1 <ninfo>: container 4815 exited with status 137
	Apr 17 19:17:13 addons-873604 crio[912]: time="2024-04-17 19:17:13.681655849Z" level=info msg="Stopped container 0814cf0bcc31b96bf5a15e8902e98280d3fb28faaa650944b3fa9d7afeb52ec6: ingress-nginx/ingress-nginx-controller-84df5799c-6b27j/controller" id=2a41d494-0684-44ba-b92d-377660ca0897 name=/runtime.v1.RuntimeService/StopContainer
	Apr 17 19:17:13 addons-873604 crio[912]: time="2024-04-17 19:17:13.682191803Z" level=info msg="Stopping pod sandbox: e4de6222494bf4915923c401b99d2fbba8770d3c996905e01d900c130d5da8e5" id=8da4ad2a-3fe7-41f9-8cd3-fdf2d218cb32 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 17 19:17:13 addons-873604 crio[912]: time="2024-04-17 19:17:13.685612542Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-UJSVHWIOQWFFKGCD - [0:0]\n:KUBE-HP-BQBRF4PBACNLZMN2 - [0:0]\n-X KUBE-HP-BQBRF4PBACNLZMN2\n-X KUBE-HP-UJSVHWIOQWFFKGCD\nCOMMIT\n"
	Apr 17 19:17:13 addons-873604 crio[912]: time="2024-04-17 19:17:13.687018661Z" level=info msg="Closing host port tcp:80"
	Apr 17 19:17:13 addons-873604 crio[912]: time="2024-04-17 19:17:13.687068957Z" level=info msg="Closing host port tcp:443"
	Apr 17 19:17:13 addons-873604 crio[912]: time="2024-04-17 19:17:13.688458002Z" level=info msg="Host port tcp:80 does not have an open socket"
	Apr 17 19:17:13 addons-873604 crio[912]: time="2024-04-17 19:17:13.688485201Z" level=info msg="Host port tcp:443 does not have an open socket"
	Apr 17 19:17:13 addons-873604 crio[912]: time="2024-04-17 19:17:13.688654542Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-84df5799c-6b27j Namespace:ingress-nginx ID:e4de6222494bf4915923c401b99d2fbba8770d3c996905e01d900c130d5da8e5 UID:f7789c3a-6583-4932-9fcd-b05c5ebdd7fb NetNS:/var/run/netns/11b9e9a8-008a-45a0-b558-60b902f270e6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Apr 17 19:17:13 addons-873604 crio[912]: time="2024-04-17 19:17:13.688797784Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-84df5799c-6b27j from CNI network \"kindnet\" (type=ptp)"
	Apr 17 19:17:13 addons-873604 crio[912]: time="2024-04-17 19:17:13.708940634Z" level=info msg="Stopped pod sandbox: e4de6222494bf4915923c401b99d2fbba8770d3c996905e01d900c130d5da8e5" id=8da4ad2a-3fe7-41f9-8cd3-fdf2d218cb32 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 17 19:17:13 addons-873604 crio[912]: time="2024-04-17 19:17:13.819450078Z" level=info msg="Removing container: 0814cf0bcc31b96bf5a15e8902e98280d3fb28faaa650944b3fa9d7afeb52ec6" id=1c7c4ed9-93e8-429f-8921-36584f44bef7 name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 17 19:17:13 addons-873604 crio[912]: time="2024-04-17 19:17:13.834395215Z" level=info msg="Removed container 0814cf0bcc31b96bf5a15e8902e98280d3fb28faaa650944b3fa9d7afeb52ec6: ingress-nginx/ingress-nginx-controller-84df5799c-6b27j/controller" id=1c7c4ed9-93e8-429f-8921-36584f44bef7 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ad98cabfcf2ba       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                        7 seconds ago       Exited              hello-world-app           2                   95ab8090e2046       hello-world-app-86c47465fc-dpqqm
	5768e9026a041       docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742                         2 minutes ago       Running             nginx                     0                   40ee22d252486       nginx
	7c983645ca770       ghcr.io/headlamp-k8s/headlamp@sha256:1f277f42730106526a27560517a4c5f9253ccb2477be458986f44a791158a02c                   3 minutes ago       Running             headlamp                  0                   3035e4d6159cf       headlamp-7559bf459f-4rl7c
	f1ee6f9af7955       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69            5 minutes ago       Running             gcp-auth                  0                   a1051241b01f6       gcp-auth-5db96cd9b4-j8trl
	bdac711194bbe       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         5 minutes ago       Running             yakd                      0                   f8052eddb7691       yakd-dashboard-5ddbf7d777-shv8z
	97e99410da22d       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70   6 minutes ago       Running             metrics-server            0                   363043df0486c       metrics-server-c59844bb4-q7zp5
	348b13f6f5fc0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        6 minutes ago       Running             storage-provisioner       0                   9533e27c6447d       storage-provisioner
	07b8ad1d41f6b       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                        6 minutes ago       Running             coredns                   0                   60dc8e785aa14       coredns-7db6d8ff4d-tf89r
	fcb960be1e4e3       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d                                                        6 minutes ago       Running             kindnet-cni               0                   95e574383062b       kindnet-xrsgr
	86f101ac5b7e9       aa30953d3c2b4acff6d925faf6c4af0ac0577bf606ddf8491ab14ca0cabba691                                                        6 minutes ago       Running             kube-proxy                0                   d23200fdc7c43       kube-proxy-zcxl8
	e9f56dc186c7a       425022910de1d4ab7b21888dfad9e8f9da04f37712dccd64347bbfd735b80657                                                        7 minutes ago       Running             kube-scheduler            0                   12c490b1344f4       kube-scheduler-addons-873604
	97206e2d817c0       88320cfaf308b507d1d1d6fa062612281320e1ca1add79c7b22b5b0a19756aa1                                                        7 minutes ago       Running             kube-controller-manager   0                   a915e5f0093c1       kube-controller-manager-addons-873604
	e7fa33d45e130       78b24de5c18c446278f50432f209bd786ff0d05a4d09b222d1f17998ae2ce121                                                        7 minutes ago       Running             kube-apiserver            0                   e7e7213a05b6e       kube-apiserver-addons-873604
	601178ae2a7a1       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                        7 minutes ago       Running             etcd                      0                   67ea6d389528c       etcd-addons-873604
	
	
	==> coredns [07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311] <==
	[INFO] 10.244.0.20:41624 - 16372 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000056605s
	[INFO] 10.244.0.20:42073 - 9120 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001999449s
	[INFO] 10.244.0.20:41624 - 22779 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000067461s
	[INFO] 10.244.0.20:42073 - 36836 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000238788s
	[INFO] 10.244.0.20:41624 - 62401 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001198673s
	[INFO] 10.244.0.20:41624 - 35033 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002058837s
	[INFO] 10.244.0.20:41624 - 8543 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000115607s
	[INFO] 10.244.0.20:42043 - 14800 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00011349s
	[INFO] 10.244.0.20:42043 - 13218 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000102479s
	[INFO] 10.244.0.20:34666 - 46433 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000066804s
	[INFO] 10.244.0.20:42043 - 32959 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000153826s
	[INFO] 10.244.0.20:34666 - 4966 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000051461s
	[INFO] 10.244.0.20:42043 - 25755 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000067264s
	[INFO] 10.244.0.20:34666 - 757 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000070907s
	[INFO] 10.244.0.20:42043 - 17547 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000063678s
	[INFO] 10.244.0.20:34666 - 55665 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00003986s
	[INFO] 10.244.0.20:42043 - 61124 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000039745s
	[INFO] 10.244.0.20:34666 - 44822 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040105s
	[INFO] 10.244.0.20:34666 - 7842 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00016999s
	[INFO] 10.244.0.20:42043 - 46459 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001286237s
	[INFO] 10.244.0.20:34666 - 12996 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001534148s
	[INFO] 10.244.0.20:42043 - 46881 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001279779s
	[INFO] 10.244.0.20:42043 - 729 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000212934s
	[INFO] 10.244.0.20:34666 - 38077 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00113648s
	[INFO] 10.244.0.20:34666 - 8603 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000074541s
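	Note: the NXDOMAIN burst above is ordinary resolv.conf search-list expansion, not a failure — each suffixed lookup fails until the bare service name resolves with NOERROR. A minimal sketch of the expansion, assuming a search list reconstructed from the logged query names (the querying pod appears to live in the ingress-nginx namespace, on a node with the us-east-2.compute.internal suffix):

```python
# Sketch of how a Kubernetes pod's resolver (ndots:5) expands a name through
# its search list, producing the NXDOMAIN sequence seen in the CoreDNS log above.
# SEARCH is an assumption reconstructed from the logged query names.

SEARCH = [
    "ingress-nginx.svc.cluster.local",  # the querying pod's own namespace
    "svc.cluster.local",
    "cluster.local",
    "us-east-2.compute.internal",       # node's cloud DNS suffix
]
NDOTS = 5

def candidate_names(name: str) -> list[str]:
    """Return the query names tried, in order, for a non-absolute lookup."""
    names = []
    if name.count(".") < NDOTS:
        # Fewer dots than ndots: each search suffix is tried first
        # (these are the lookups answered NXDOMAIN in the log)...
        names += [f"{name}.{suffix}" for suffix in SEARCH]
    # ...then the name as given, which finally resolves (NOERROR).
    names.append(name)
    return names

for q in candidate_names("hello-world-app.default.svc.cluster.local"):
    print(q)
```

	The printed sequence matches the query names in the log lines above: four suffixed NXDOMAIN attempts, then the bare name that returns NOERROR.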
	
	
	==> describe nodes <==
	Name:               addons-873604
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-873604
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=addons-873604
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_17T19_10_11_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-873604
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 19:10:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-873604
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 19:17:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Apr 2024 19:14:45 +0000   Wed, 17 Apr 2024 19:10:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Apr 2024 19:14:45 +0000   Wed, 17 Apr 2024 19:10:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Apr 2024 19:14:45 +0000   Wed, 17 Apr 2024 19:10:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Apr 2024 19:14:45 +0000   Wed, 17 Apr 2024 19:10:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-873604
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f283e0278094667a9e13c23300099a6
	  System UUID:                dc0c30cc-5a3b-4082-8b79-86e7972a9cc9
	  Boot ID:                    ab21f790-14ed-4d12-b82f-2c18616b58d7
	  Kernel Version:             5.15.0-1057-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-dpqqm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  gcp-auth                    gcp-auth-5db96cd9b4-j8trl                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	  headlamp                    headlamp-7559bf459f-4rl7c                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 coredns-7db6d8ff4d-tf89r                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m55s
	  kube-system                 etcd-addons-873604                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7m8s
	  kube-system                 kindnet-xrsgr                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m54s
	  kube-system                 kube-apiserver-addons-873604             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-controller-manager-addons-873604    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 kube-proxy-zcxl8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m54s
	  kube-system                 kube-scheduler-addons-873604             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 metrics-server-c59844bb4-q7zp5           100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m50s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-shv8z          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     6m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             548Mi (6%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m15s (x8 over 7m15s)  kubelet          Node addons-873604 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m15s (x8 over 7m15s)  kubelet          Node addons-873604 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m15s (x8 over 7m15s)  kubelet          Node addons-873604 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m9s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m8s                   kubelet          Node addons-873604 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m8s                   kubelet          Node addons-873604 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m8s                   kubelet          Node addons-873604 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m55s                  node-controller  Node addons-873604 event: Registered Node addons-873604 in Controller
	  Normal  NodeReady                6m22s                  kubelet          Node addons-873604 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001122] FS-Cache: O-key=[8] '176fed0000000000'
	[  +0.000728] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=000000008d8d0d3c{9p.inode} n=00000000375612da
	[  +0.001057] FS-Cache: N-key=[8] '176fed0000000000'
	[  +0.002899] FS-Cache: Duplicate cookie detected
	[  +0.000729] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000969] FS-Cache: O-cookie d=000000008d8d0d3c{9p.inode} n=00000000222534be
	[  +0.001144] FS-Cache: O-key=[8] '176fed0000000000'
	[  +0.000797] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000974] FS-Cache: N-cookie d=000000008d8d0d3c{9p.inode} n=00000000efdc28ce
	[  +0.001067] FS-Cache: N-key=[8] '176fed0000000000'
	[  +2.738471] FS-Cache: Duplicate cookie detected
	[  +0.000797] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000979] FS-Cache: O-cookie d=000000008d8d0d3c{9p.inode} n=0000000071b913e3
	[  +0.001171] FS-Cache: O-key=[8] '166fed0000000000'
	[  +0.000734] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000989] FS-Cache: N-cookie d=000000008d8d0d3c{9p.inode} n=00000000cfcbb435
	[  +0.001078] FS-Cache: N-key=[8] '166fed0000000000'
	[  +0.344778] FS-Cache: Duplicate cookie detected
	[  +0.000722] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001021] FS-Cache: O-cookie d=000000008d8d0d3c{9p.inode} n=000000004c6b74e8
	[  +0.001035] FS-Cache: O-key=[8] '1c6fed0000000000'
	[  +0.000749] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000980] FS-Cache: N-cookie d=000000008d8d0d3c{9p.inode} n=00000000b0fc5245
	[  +0.001098] FS-Cache: N-key=[8] '1c6fed0000000000'
	
	
	==> etcd [601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1] <==
	{"level":"info","ts":"2024-04-17T19:10:04.999902Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-04-17T19:10:04.999922Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-04-17T19:10:05.168422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-17T19:10:05.168475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-17T19:10:05.168501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-04-17T19:10:05.168518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-04-17T19:10:05.168525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-04-17T19:10:05.168542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-04-17T19:10:05.16855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-04-17T19:10:05.172531Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-17T19:10:05.17592Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-873604 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-17T19:10:05.176072Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-17T19:10:05.176122Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-17T19:10:05.178083Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-04-17T19:10:05.17819Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-17T19:10:05.178267Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-17T19:10:05.178295Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-17T19:10:05.192451Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-17T19:10:05.192489Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-17T19:10:05.205482Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-04-17T19:10:26.102111Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.981924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-17T19:10:26.111306Z","caller":"traceutil/trace.go:171","msg":"trace[449486202] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:393; }","duration":"165.188546ms","start":"2024-04-17T19:10:25.946093Z","end":"2024-04-17T19:10:26.111282Z","steps":["trace[449486202] 'get authentication metadata'  (duration: 83.567129ms)","trace[449486202] 'range keys from in-memory index tree'  (duration: 72.168959ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-17T19:10:26.121079Z","caller":"traceutil/trace.go:171","msg":"trace[121349269] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"110.431613ms","start":"2024-04-17T19:10:26.010631Z","end":"2024-04-17T19:10:26.121063Z","steps":["trace[121349269] 'process raft request'  (duration: 48.993658ms)","trace[121349269] 'compare'  (duration: 51.638273ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-17T19:10:26.232148Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.741996ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128028578057463715 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kindnet-xrsgr.17c726f712bdc733\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kindnet-xrsgr.17c726f712bdc733\" value_size:690 lease:8128028578057462975 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-17T19:10:26.232262Z","caller":"traceutil/trace.go:171","msg":"trace[9082437] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"133.441642ms","start":"2024-04-17T19:10:26.09881Z","end":"2024-04-17T19:10:26.232252Z","steps":["trace[9082437] 'process raft request'  (duration: 22.206244ms)"],"step_count":1}
	
	
	==> gcp-auth [f1ee6f9af795519a5b89e446d8d966b8898a1b9f77b9dcde765e3ab58ba288af] <==
	2024/04/17 19:11:48 GCP Auth Webhook started!
	2024/04/17 19:13:13 Ready to marshal response ...
	2024/04/17 19:13:13 Ready to write response ...
	2024/04/17 19:13:13 Ready to marshal response ...
	2024/04/17 19:13:13 Ready to write response ...
	2024/04/17 19:13:14 Ready to marshal response ...
	2024/04/17 19:13:14 Ready to write response ...
	2024/04/17 19:13:24 Ready to marshal response ...
	2024/04/17 19:13:24 Ready to write response ...
	2024/04/17 19:13:25 Ready to marshal response ...
	2024/04/17 19:13:25 Ready to write response ...
	2024/04/17 19:13:25 Ready to marshal response ...
	2024/04/17 19:13:25 Ready to write response ...
	2024/04/17 19:13:25 Ready to marshal response ...
	2024/04/17 19:13:25 Ready to write response ...
	2024/04/17 19:13:38 Ready to marshal response ...
	2024/04/17 19:13:38 Ready to write response ...
	2024/04/17 19:14:04 Ready to marshal response ...
	2024/04/17 19:14:04 Ready to write response ...
	2024/04/17 19:14:33 Ready to marshal response ...
	2024/04/17 19:14:33 Ready to write response ...
	2024/04/17 19:16:52 Ready to marshal response ...
	2024/04/17 19:16:52 Ready to write response ...
	
	
	==> kernel <==
	 19:17:19 up  2:59,  0 users,  load average: 0.43, 1.30, 2.16
	Linux addons-873604 5.15.0-1057-aws #63~20.04.1-Ubuntu SMP Mon Mar 25 10:29:14 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6] <==
	I0417 19:15:17.919807       1 main.go:227] handling current node
	I0417 19:15:27.923763       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0417 19:15:27.923797       1 main.go:227] handling current node
	I0417 19:15:37.935059       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0417 19:15:37.935086       1 main.go:227] handling current node
	I0417 19:15:47.945768       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0417 19:15:47.945797       1 main.go:227] handling current node
	I0417 19:15:57.956888       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0417 19:15:57.956918       1 main.go:227] handling current node
	I0417 19:16:07.969415       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0417 19:16:07.969447       1 main.go:227] handling current node
	I0417 19:16:17.981185       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0417 19:16:17.981217       1 main.go:227] handling current node
	I0417 19:16:27.985423       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0417 19:16:27.985454       1 main.go:227] handling current node
	I0417 19:16:37.995398       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0417 19:16:37.995427       1 main.go:227] handling current node
	I0417 19:16:48.010822       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0417 19:16:48.010867       1 main.go:227] handling current node
	I0417 19:16:58.014530       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0417 19:16:58.014562       1 main.go:227] handling current node
	I0417 19:17:08.024604       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0417 19:17:08.024637       1 main.go:227] handling current node
	I0417 19:17:18.041682       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0417 19:17:18.041842       1 main.go:227] handling current node
	
	
	==> kube-apiserver [e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0417 19:12:28.056295       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.21.207:443/apis/metrics.k8s.io/v1beta1: Get "https://10.101.21.207:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.101.21.207:443: connect: connection refused
	I0417 19:12:28.122806       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0417 19:13:24.923913       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.183.223"}
	E0417 19:13:41.159963       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0417 19:13:49.676039       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0417 19:14:21.177494       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0417 19:14:21.177661       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0417 19:14:21.200660       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0417 19:14:21.201098       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0417 19:14:21.223391       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0417 19:14:21.223436       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0417 19:14:21.239298       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0417 19:14:21.239355       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0417 19:14:21.259825       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0417 19:14:21.261697       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0417 19:14:21.361640       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"snapshot-controller\" not found]"
	W0417 19:14:22.223604       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0417 19:14:22.260756       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0417 19:14:22.284496       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0417 19:14:28.027539       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0417 19:14:29.064495       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0417 19:14:33.621642       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0417 19:14:33.942910       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.125.79"}
	I0417 19:16:53.100622       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.90.120"}
	
	
	==> kube-controller-manager [97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458] <==
	E0417 19:15:53.937712       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0417 19:16:13.952099       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0417 19:16:13.952159       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0417 19:16:19.130323       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0417 19:16:19.130361       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0417 19:16:26.407198       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0417 19:16:26.407238       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0417 19:16:51.049175       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0417 19:16:51.049220       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0417 19:16:52.862567       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="38.795495ms"
	I0417 19:16:52.894830       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="32.215803ms"
	I0417 19:16:52.896532       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="63.703µs"
	I0417 19:16:56.789470       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="46.572µs"
	I0417 19:16:57.796872       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="43.404µs"
	I0417 19:16:58.799642       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="44.906µs"
	W0417 19:17:05.952748       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0417 19:17:05.952793       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0417 19:17:10.493277       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0417 19:17:10.500516       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="6.646µs"
	I0417 19:17:10.508281       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0417 19:17:11.828051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="40.655µs"
	W0417 19:17:12.671849       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0417 19:17:12.671891       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0417 19:17:17.443456       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0417 19:17:17.443505       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17] <==
	I0417 19:10:29.062651       1 server_linux.go:69] "Using iptables proxy"
	I0417 19:10:29.557588       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0417 19:10:30.059299       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0417 19:10:30.059442       1 server_linux.go:165] "Using iptables Proxier"
	I0417 19:10:30.131118       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0417 19:10:30.131408       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0417 19:10:30.131998       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0417 19:10:30.132362       1 server.go:872] "Version info" version="v1.30.0-rc.2"
	I0417 19:10:30.132513       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 19:10:30.133803       1 config.go:192] "Starting service config controller"
	I0417 19:10:30.133919       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0417 19:10:30.134005       1 config.go:101] "Starting endpoint slice config controller"
	I0417 19:10:30.134050       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0417 19:10:30.139469       1 config.go:319] "Starting node config controller"
	I0417 19:10:30.140539       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0417 19:10:30.238057       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0417 19:10:30.249161       1 shared_informer.go:320] Caches are synced for service config
	I0417 19:10:30.250075       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76] <==
	W0417 19:10:08.590406       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0417 19:10:08.590758       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0417 19:10:08.590446       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0417 19:10:08.590827       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0417 19:10:08.590486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0417 19:10:08.590898       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0417 19:10:08.594573       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0417 19:10:08.594682       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0417 19:10:09.416937       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0417 19:10:09.416982       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0417 19:10:09.458742       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0417 19:10:09.458872       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0417 19:10:09.540585       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0417 19:10:09.540621       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0417 19:10:09.617942       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0417 19:10:09.617985       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0417 19:10:09.687865       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0417 19:10:09.687925       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0417 19:10:09.750333       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0417 19:10:09.750381       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0417 19:10:09.750438       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0417 19:10:09.750457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0417 19:10:09.762547       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0417 19:10:09.762590       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0417 19:10:12.267357       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 17 19:17:07 addons-873604 kubelet[1495]: E0417 19:17:07.801171    1495 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(b4ebfb39-7e93-4561-9442-16bc8af64c70)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="b4ebfb39-7e93-4561-9442-16bc8af64c70"
	Apr 17 19:17:09 addons-873604 kubelet[1495]: I0417 19:17:09.053891    1495 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssd46\" (UniqueName: \"kubernetes.io/projected/b4ebfb39-7e93-4561-9442-16bc8af64c70-kube-api-access-ssd46\") pod \"b4ebfb39-7e93-4561-9442-16bc8af64c70\" (UID: \"b4ebfb39-7e93-4561-9442-16bc8af64c70\") "
	Apr 17 19:17:09 addons-873604 kubelet[1495]: I0417 19:17:09.059156    1495 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4ebfb39-7e93-4561-9442-16bc8af64c70-kube-api-access-ssd46" (OuterVolumeSpecName: "kube-api-access-ssd46") pod "b4ebfb39-7e93-4561-9442-16bc8af64c70" (UID: "b4ebfb39-7e93-4561-9442-16bc8af64c70"). InnerVolumeSpecName "kube-api-access-ssd46". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 17 19:17:09 addons-873604 kubelet[1495]: I0417 19:17:09.154731    1495 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ssd46\" (UniqueName: \"kubernetes.io/projected/b4ebfb39-7e93-4561-9442-16bc8af64c70-kube-api-access-ssd46\") on node \"addons-873604\" DevicePath \"\""
	Apr 17 19:17:09 addons-873604 kubelet[1495]: I0417 19:17:09.807259    1495 scope.go:117] "RemoveContainer" containerID="65e7f2082c2003c0698e31931a82dec8d6334c9206bc31ac896546bd5df5ecf1"
	Apr 17 19:17:11 addons-873604 kubelet[1495]: I0417 19:17:11.008036    1495 scope.go:117] "RemoveContainer" containerID="e14e4a9881e6a4aba219b153f9de3518727a8ec080a1e5f20f922a1a8f9f13f5"
	Apr 17 19:17:11 addons-873604 kubelet[1495]: I0417 19:17:11.010482    1495 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0099fa2a-8f00-421d-879a-b6f484a85a25" path="/var/lib/kubelet/pods/0099fa2a-8f00-421d-879a-b6f484a85a25/volumes"
	Apr 17 19:17:11 addons-873604 kubelet[1495]: I0417 19:17:11.010879    1495 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4ebfb39-7e93-4561-9442-16bc8af64c70" path="/var/lib/kubelet/pods/b4ebfb39-7e93-4561-9442-16bc8af64c70/volumes"
	Apr 17 19:17:11 addons-873604 kubelet[1495]: I0417 19:17:11.011296    1495 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9d9b173-ac42-472b-888c-eb909bc73708" path="/var/lib/kubelet/pods/e9d9b173-ac42-472b-888c-eb909bc73708/volumes"
	Apr 17 19:17:11 addons-873604 kubelet[1495]: I0417 19:17:11.363001    1495 scope.go:117] "RemoveContainer" containerID="49d046365e9edc7f71acd5b88ad1622c71ac668101593363c6e790b1a2b24ab9"
	Apr 17 19:17:11 addons-873604 kubelet[1495]: I0417 19:17:11.382008    1495 scope.go:117] "RemoveContainer" containerID="e34e902ba4083159aa767e3cae6c7523b82061e83b3ce53eccf7df7a4783947d"
	Apr 17 19:17:11 addons-873604 kubelet[1495]: I0417 19:17:11.407927    1495 scope.go:117] "RemoveContainer" containerID="e14e4a9881e6a4aba219b153f9de3518727a8ec080a1e5f20f922a1a8f9f13f5"
	Apr 17 19:17:11 addons-873604 kubelet[1495]: I0417 19:17:11.813815    1495 scope.go:117] "RemoveContainer" containerID="ad98cabfcf2ba6a198b01d3dbf44641ae6ed038b147cb26975626bd56f81f706"
	Apr 17 19:17:11 addons-873604 kubelet[1495]: E0417 19:17:11.814083    1495 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-dpqqm_default(69350501-1ef3-4b48-8974-ca1451a9592f)\"" pod="default/hello-world-app-86c47465fc-dpqqm" podUID="69350501-1ef3-4b48-8974-ca1451a9592f"
	Apr 17 19:17:13 addons-873604 kubelet[1495]: I0417 19:17:13.818326    1495 scope.go:117] "RemoveContainer" containerID="0814cf0bcc31b96bf5a15e8902e98280d3fb28faaa650944b3fa9d7afeb52ec6"
	Apr 17 19:17:13 addons-873604 kubelet[1495]: I0417 19:17:13.834655    1495 scope.go:117] "RemoveContainer" containerID="0814cf0bcc31b96bf5a15e8902e98280d3fb28faaa650944b3fa9d7afeb52ec6"
	Apr 17 19:17:13 addons-873604 kubelet[1495]: E0417 19:17:13.835063    1495 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0814cf0bcc31b96bf5a15e8902e98280d3fb28faaa650944b3fa9d7afeb52ec6\": container with ID starting with 0814cf0bcc31b96bf5a15e8902e98280d3fb28faaa650944b3fa9d7afeb52ec6 not found: ID does not exist" containerID="0814cf0bcc31b96bf5a15e8902e98280d3fb28faaa650944b3fa9d7afeb52ec6"
	Apr 17 19:17:13 addons-873604 kubelet[1495]: I0417 19:17:13.835104    1495 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0814cf0bcc31b96bf5a15e8902e98280d3fb28faaa650944b3fa9d7afeb52ec6"} err="failed to get container status \"0814cf0bcc31b96bf5a15e8902e98280d3fb28faaa650944b3fa9d7afeb52ec6\": rpc error: code = NotFound desc = could not find container \"0814cf0bcc31b96bf5a15e8902e98280d3fb28faaa650944b3fa9d7afeb52ec6\": container with ID starting with 0814cf0bcc31b96bf5a15e8902e98280d3fb28faaa650944b3fa9d7afeb52ec6 not found: ID does not exist"
	Apr 17 19:17:13 addons-873604 kubelet[1495]: I0417 19:17:13.891622    1495 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f7789c3a-6583-4932-9fcd-b05c5ebdd7fb-webhook-cert\") pod \"f7789c3a-6583-4932-9fcd-b05c5ebdd7fb\" (UID: \"f7789c3a-6583-4932-9fcd-b05c5ebdd7fb\") "
	Apr 17 19:17:13 addons-873604 kubelet[1495]: I0417 19:17:13.891676    1495 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-646v2\" (UniqueName: \"kubernetes.io/projected/f7789c3a-6583-4932-9fcd-b05c5ebdd7fb-kube-api-access-646v2\") pod \"f7789c3a-6583-4932-9fcd-b05c5ebdd7fb\" (UID: \"f7789c3a-6583-4932-9fcd-b05c5ebdd7fb\") "
	Apr 17 19:17:13 addons-873604 kubelet[1495]: I0417 19:17:13.893791    1495 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7789c3a-6583-4932-9fcd-b05c5ebdd7fb-kube-api-access-646v2" (OuterVolumeSpecName: "kube-api-access-646v2") pod "f7789c3a-6583-4932-9fcd-b05c5ebdd7fb" (UID: "f7789c3a-6583-4932-9fcd-b05c5ebdd7fb"). InnerVolumeSpecName "kube-api-access-646v2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 17 19:17:13 addons-873604 kubelet[1495]: I0417 19:17:13.896966    1495 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7789c3a-6583-4932-9fcd-b05c5ebdd7fb-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f7789c3a-6583-4932-9fcd-b05c5ebdd7fb" (UID: "f7789c3a-6583-4932-9fcd-b05c5ebdd7fb"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Apr 17 19:17:13 addons-873604 kubelet[1495]: I0417 19:17:13.992521    1495 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f7789c3a-6583-4932-9fcd-b05c5ebdd7fb-webhook-cert\") on node \"addons-873604\" DevicePath \"\""
	Apr 17 19:17:13 addons-873604 kubelet[1495]: I0417 19:17:13.992586    1495 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-646v2\" (UniqueName: \"kubernetes.io/projected/f7789c3a-6583-4932-9fcd-b05c5ebdd7fb-kube-api-access-646v2\") on node \"addons-873604\" DevicePath \"\""
	Apr 17 19:17:15 addons-873604 kubelet[1495]: I0417 19:17:15.015441    1495 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7789c3a-6583-4932-9fcd-b05c5ebdd7fb" path="/var/lib/kubelet/pods/f7789c3a-6583-4932-9fcd-b05c5ebdd7fb/volumes"
	
	
	==> storage-provisioner [348b13f6f5fc01b2dfacdc4caf00f99f248cb3578ddda6d5b6c3305ee786cfd0] <==
	I0417 19:10:58.983250       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0417 19:10:59.005117       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0417 19:10:59.005264       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0417 19:10:59.016530       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0417 19:10:59.016656       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"380d4bd5-c6be-4a43-a605-42fa0a26edb0", APIVersion:"v1", ResourceVersion:"929", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-873604_23055689-08a4-4681-b2fd-6136f51d4e9b became leader
	I0417 19:10:59.027267       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-873604_23055689-08a4-4681-b2fd-6136f51d4e9b!
	I0417 19:10:59.127664       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-873604_23055689-08a4-4681-b2fd-6136f51d4e9b!
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-873604 -n addons-873604
helpers_test.go:261: (dbg) Run:  kubectl --context addons-873604 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (166.81s)

TestAddons/parallel/MetricsServer (364.68s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 7.019053ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-q7zp5" [da8a5501-6baf-4977-905c-f81fe98110e2] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007022566s
addons_test.go:415: (dbg) Run:  kubectl --context addons-873604 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-873604 top pods -n kube-system: exit status 1 (127.705814ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tf89r, age: 3m49.522693827s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-873604 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-873604 top pods -n kube-system: exit status 1 (96.327932ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tf89r, age: 3m52.297809006s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-873604 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-873604 top pods -n kube-system: exit status 1 (87.114884ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tf89r, age: 3m55.127745923s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-873604 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-873604 top pods -n kube-system: exit status 1 (107.191338ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tf89r, age: 3m59.337338705s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-873604 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-873604 top pods -n kube-system: exit status 1 (116.247382ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tf89r, age: 4m6.271107166s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-873604 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-873604 top pods -n kube-system: exit status 1 (92.804027ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tf89r, age: 4m16.476574413s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-873604 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-873604 top pods -n kube-system: exit status 1 (86.453587ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tf89r, age: 4m32.289672695s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-873604 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-873604 top pods -n kube-system: exit status 1 (91.397496ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tf89r, age: 5m1.969474667s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-873604 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-873604 top pods -n kube-system: exit status 1 (101.67778ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tf89r, age: 5m55.923739658s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-873604 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-873604 top pods -n kube-system: exit status 1 (87.235291ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tf89r, age: 6m48.246687486s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-873604 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-873604 top pods -n kube-system: exit status 1 (98.677219ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tf89r, age: 7m42.437202042s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-873604 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-873604 top pods -n kube-system: exit status 1 (86.986971ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tf89r, age: 8m12.620715532s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-873604 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-873604 top pods -n kube-system: exit status 1 (87.503483ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tf89r, age: 9m8.846145661s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-873604 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-873604 top pods -n kube-system: exit status 1 (90.9176ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tf89r, age: 9m45.866854993s

** /stderr **
addons_test.go:429: failed checking metric server: exit status 1
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-873604 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-873604
helpers_test.go:235: (dbg) docker inspect addons-873604:

-- stdout --
	[
	    {
	        "Id": "3fc24619954a7593d0d4dd5ee93c7caab0c44eeb97c7a5046a09cfe1e919f631",
	        "Created": "2024-04-17T19:09:49.125765957Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 694625,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-17T19:09:49.423029844Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f315bc3928e1aa212ec64171b55477a58b0d51266c0204d2cba9566780672a72",
	        "ResolvConfPath": "/var/lib/docker/containers/3fc24619954a7593d0d4dd5ee93c7caab0c44eeb97c7a5046a09cfe1e919f631/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3fc24619954a7593d0d4dd5ee93c7caab0c44eeb97c7a5046a09cfe1e919f631/hostname",
	        "HostsPath": "/var/lib/docker/containers/3fc24619954a7593d0d4dd5ee93c7caab0c44eeb97c7a5046a09cfe1e919f631/hosts",
	        "LogPath": "/var/lib/docker/containers/3fc24619954a7593d0d4dd5ee93c7caab0c44eeb97c7a5046a09cfe1e919f631/3fc24619954a7593d0d4dd5ee93c7caab0c44eeb97c7a5046a09cfe1e919f631-json.log",
	        "Name": "/addons-873604",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-873604:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-873604",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/78dc28b2ff51c773663d53c5f240b185b7934174d1c6b0e71b638383d84f6917-init/diff:/var/lib/docker/overlay2/05d9d5befaed30420d7a8f984a07ae80fc52626598e920d0ade8d12271084d40/diff",
	                "MergedDir": "/var/lib/docker/overlay2/78dc28b2ff51c773663d53c5f240b185b7934174d1c6b0e71b638383d84f6917/merged",
	                "UpperDir": "/var/lib/docker/overlay2/78dc28b2ff51c773663d53c5f240b185b7934174d1c6b0e71b638383d84f6917/diff",
	                "WorkDir": "/var/lib/docker/overlay2/78dc28b2ff51c773663d53c5f240b185b7934174d1c6b0e71b638383d84f6917/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-873604",
	                "Source": "/var/lib/docker/volumes/addons-873604/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-873604",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-873604",
	                "name.minikube.sigs.k8s.io": "addons-873604",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "59a62741523b0d53182d92c517313e971ec25e810c63d437a531cc275f9f2bae",
	            "SandboxKey": "/var/run/docker/netns/59a62741523b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33542"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33541"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33538"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33540"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33539"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-873604": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "0a9713c7292f458ca427fec72e9fdc386354489a98c0b20a7bca9591b589d0e2",
	                    "EndpointID": "6a1c0e59d2410f76f96505693890c8286a22d3834d47be002500ac43f5895edf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-873604",
	                        "3fc24619954a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
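The inspect dump above publishes each container port (22, 2376, 5000, 8443, 32443) to a distinct `HostPort` on 127.0.0.1. When digging through such dumps post-mortem, the mapping can be pulled out programmatically; this is a minimal sketch, using a trimmed, hypothetical sample shaped like the `NetworkSettings.Ports` section above (the `host_ports` helper is illustrative, not part of minikube or the test harness):

```python
import json

# Trimmed sample shaped like the "NetworkSettings.Ports" section of the
# `docker inspect` output above (values copied for illustration only).
inspect_output = json.loads("""
[{"NetworkSettings": {"Ports": {
    "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "33542"}],
    "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "33539"}]
}}}]
""")

def host_ports(inspect_json):
    # Map each exposed container port (e.g. "8443/tcp") to its published
    # host port, skipping ports with no bindings.
    ports = inspect_json[0]["NetworkSettings"]["Ports"]
    return {cport: binds[0]["HostPort"] for cport, binds in ports.items() if binds}

print(host_ports(inspect_output))
# prints {'22/tcp': '33542', '8443/tcp': '33539'}
```

In a real session the JSON would come from `docker inspect addons-873604` rather than a literal string.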
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-873604 -n addons-873604
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-873604 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-873604 logs -n 25: (1.58663008s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p download-only-545184                                                                     | download-only-545184   | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC | 17 Apr 24 19:09 UTC |
	| delete  | -p download-only-251262                                                                     | download-only-251262   | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC | 17 Apr 24 19:09 UTC |
	| delete  | -p download-only-545184                                                                     | download-only-545184   | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC | 17 Apr 24 19:09 UTC |
	| start   | --download-only -p                                                                          | download-docker-474356 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC |                     |
	|         | download-docker-474356                                                                      |                        |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |                |                     |                     |
	|         | --driver=docker                                                                             |                        |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |                |                     |                     |
	| delete  | -p download-docker-474356                                                                   | download-docker-474356 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC | 17 Apr 24 19:09 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-250898   | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC |                     |
	|         | binary-mirror-250898                                                                        |                        |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |                |                     |                     |
	|         | http://127.0.0.1:33811                                                                      |                        |         |                |                     |                     |
	|         | --driver=docker                                                                             |                        |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |                |                     |                     |
	| delete  | -p binary-mirror-250898                                                                     | binary-mirror-250898   | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC | 17 Apr 24 19:09 UTC |
	| addons  | disable dashboard -p                                                                        | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC |                     |
	|         | addons-873604                                                                               |                        |         |                |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC |                     |
	|         | addons-873604                                                                               |                        |         |                |                     |                     |
	| start   | -p addons-873604 --wait=true                                                                | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC | 17 Apr 24 19:13 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |                |                     |                     |
	|         | --addons=registry                                                                           |                        |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |                |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |                |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:13 UTC | 17 Apr 24 19:13 UTC |
	|         | -p addons-873604                                                                            |                        |         |                |                     |                     |
	| ip      | addons-873604 ip                                                                            | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:13 UTC | 17 Apr 24 19:13 UTC |
	| addons  | addons-873604 addons disable                                                                | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:13 UTC | 17 Apr 24 19:13 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |                |                     |                     |
	|         | -v=1                                                                                        |                        |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:13 UTC | 17 Apr 24 19:13 UTC |
	|         | addons-873604                                                                               |                        |         |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:13 UTC | 17 Apr 24 19:13 UTC |
	|         | -p addons-873604                                                                            |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| ssh     | addons-873604 ssh cat                                                                       | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:13 UTC | 17 Apr 24 19:13 UTC |
	|         | /opt/local-path-provisioner/pvc-814c2d54-9fef-4b2f-bb69-2330200001c7_default_test-pvc/file1 |                        |         |                |                     |                     |
	| addons  | addons-873604 addons disable                                                                | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:13 UTC | 17 Apr 24 19:14 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | addons-873604 addons                                                                        | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:14 UTC | 17 Apr 24 19:14 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | addons-873604 addons                                                                        | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:14 UTC | 17 Apr 24 19:14 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:14 UTC | 17 Apr 24 19:14 UTC |
	|         | addons-873604                                                                               |                        |         |                |                     |                     |
	| ssh     | addons-873604 ssh curl -s                                                                   | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:14 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |                |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |                |                     |                     |
	| ip      | addons-873604 ip                                                                            | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:16 UTC | 17 Apr 24 19:16 UTC |
	| addons  | addons-873604 addons disable                                                                | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:17 UTC | 17 Apr 24 19:17 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |                |                     |                     |
	|         | -v=1                                                                                        |                        |         |                |                     |                     |
	| addons  | addons-873604 addons disable                                                                | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:17 UTC | 17 Apr 24 19:17 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |                |                     |                     |
	| addons  | addons-873604 addons                                                                        | addons-873604          | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | disable metrics-server                                                                      |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/17 19:09:24
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0417 19:09:24.963608  694161 out.go:291] Setting OutFile to fd 1 ...
	I0417 19:09:24.963778  694161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:09:24.963786  694161 out.go:304] Setting ErrFile to fd 2...
	I0417 19:09:24.963791  694161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:09:24.964107  694161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-688109/.minikube/bin
	I0417 19:09:24.964776  694161 out.go:298] Setting JSON to false
	I0417 19:09:24.965912  694161 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10312,"bootTime":1713370653,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1057-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0417 19:09:24.965985  694161 start.go:139] virtualization:  
	I0417 19:09:24.969266  694161 out.go:177] * [addons-873604] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0417 19:09:24.972476  694161 out.go:177]   - MINIKUBE_LOCATION=18665
	I0417 19:09:24.975011  694161 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 19:09:24.972542  694161 notify.go:220] Checking for updates...
	I0417 19:09:24.977063  694161 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18665-688109/kubeconfig
	I0417 19:09:24.979166  694161 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-688109/.minikube
	I0417 19:09:24.981356  694161 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0417 19:09:24.983887  694161 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0417 19:09:24.986496  694161 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 19:09:25.017092  694161 docker.go:122] docker version: linux-26.0.1:Docker Engine - Community
	I0417 19:09:25.017230  694161 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0417 19:09:25.080357  694161 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-17 19:09:25.068772477 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0417 19:09:25.080496  694161 docker.go:295] overlay module found
	I0417 19:09:25.083264  694161 out.go:177] * Using the docker driver based on user configuration
	I0417 19:09:25.085875  694161 start.go:297] selected driver: docker
	I0417 19:09:25.085901  694161 start.go:901] validating driver "docker" against <nil>
	I0417 19:09:25.085916  694161 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0417 19:09:25.086605  694161 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0417 19:09:25.147547  694161 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-17 19:09:25.138652526 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0417 19:09:25.147723  694161 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0417 19:09:25.147953  694161 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0417 19:09:25.150664  694161 out.go:177] * Using Docker driver with root privileges
	I0417 19:09:25.153267  694161 cni.go:84] Creating CNI manager for ""
	I0417 19:09:25.153293  694161 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0417 19:09:25.153304  694161 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0417 19:09:25.153409  694161 start.go:340] cluster config:
	{Name:addons-873604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:addons-873604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:09:25.157220  694161 out.go:177] * Starting "addons-873604" primary control-plane node in "addons-873604" cluster
	I0417 19:09:25.159845  694161 cache.go:121] Beginning downloading kic base image for docker with crio
	I0417 19:09:25.162538  694161 out.go:177] * Pulling base image v0.0.43-1713236840-18649 ...
	I0417 19:09:25.165381  694161 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 19:09:25.165446  694161 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18665-688109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I0417 19:09:25.165468  694161 cache.go:56] Caching tarball of preloaded images
	I0417 19:09:25.165508  694161 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e in local docker daemon
	I0417 19:09:25.165587  694161 preload.go:173] Found /home/jenkins/minikube-integration/18665-688109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0417 19:09:25.165599  694161 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on crio
	I0417 19:09:25.165971  694161 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/config.json ...
	I0417 19:09:25.165995  694161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/config.json: {Name:mk21e21ce2e4cd3b7058fdf531f3edbc9d07af39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:09:25.179811  694161 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e to local cache
	I0417 19:09:25.179942  694161 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e in local cache directory
	I0417 19:09:25.179968  694161 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e in local cache directory, skipping pull
	I0417 19:09:25.179973  694161 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e exists in cache, skipping pull
	I0417 19:09:25.179985  694161 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e as a tarball
	I0417 19:09:25.179995  694161 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e from local cache
	I0417 19:09:41.842808  694161 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e from cached tarball
	I0417 19:09:41.842846  694161 cache.go:194] Successfully downloaded all kic artifacts
	I0417 19:09:41.842887  694161 start.go:360] acquireMachinesLock for addons-873604: {Name:mk9f3554f23e850971a17136b150084dad1ed5dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 19:09:41.843015  694161 start.go:364] duration metric: took 104.67µs to acquireMachinesLock for "addons-873604"
	I0417 19:09:41.843046  694161 start.go:93] Provisioning new machine with config: &{Name:addons-873604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:addons-873604 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 19:09:41.843127  694161 start.go:125] createHost starting for "" (driver="docker")
	I0417 19:09:41.846368  694161 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0417 19:09:41.846610  694161 start.go:159] libmachine.API.Create for "addons-873604" (driver="docker")
	I0417 19:09:41.846644  694161 client.go:168] LocalClient.Create starting
	I0417 19:09:41.846757  694161 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18665-688109/.minikube/certs/ca.pem
	I0417 19:09:42.095468  694161 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18665-688109/.minikube/certs/cert.pem
	I0417 19:09:42.558138  694161 cli_runner.go:164] Run: docker network inspect addons-873604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0417 19:09:42.573000  694161 cli_runner.go:211] docker network inspect addons-873604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0417 19:09:42.573100  694161 network_create.go:281] running [docker network inspect addons-873604] to gather additional debugging logs...
	I0417 19:09:42.573124  694161 cli_runner.go:164] Run: docker network inspect addons-873604
	W0417 19:09:42.591114  694161 cli_runner.go:211] docker network inspect addons-873604 returned with exit code 1
	I0417 19:09:42.591149  694161 network_create.go:284] error running [docker network inspect addons-873604]: docker network inspect addons-873604: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-873604 not found
	I0417 19:09:42.591164  694161 network_create.go:286] output of [docker network inspect addons-873604]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-873604 not found
	
	** /stderr **
	I0417 19:09:42.591282  694161 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0417 19:09:42.607103  694161 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400299a2d0}
	I0417 19:09:42.607146  694161 network_create.go:124] attempt to create docker network addons-873604 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0417 19:09:42.607210  694161 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-873604 addons-873604
	I0417 19:09:42.689853  694161 network_create.go:108] docker network addons-873604 192.168.49.0/24 created
	I0417 19:09:42.689886  694161 kic.go:121] calculated static IP "192.168.49.2" for the "addons-873604" container
	I0417 19:09:42.689958  694161 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0417 19:09:42.702542  694161 cli_runner.go:164] Run: docker volume create addons-873604 --label name.minikube.sigs.k8s.io=addons-873604 --label created_by.minikube.sigs.k8s.io=true
	I0417 19:09:42.717123  694161 oci.go:103] Successfully created a docker volume addons-873604
	I0417 19:09:42.717222  694161 cli_runner.go:164] Run: docker run --rm --name addons-873604-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-873604 --entrypoint /usr/bin/test -v addons-873604:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e -d /var/lib
	I0417 19:09:44.764651  694161 cli_runner.go:217] Completed: docker run --rm --name addons-873604-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-873604 --entrypoint /usr/bin/test -v addons-873604:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e -d /var/lib: (2.047387664s)
	I0417 19:09:44.764683  694161 oci.go:107] Successfully prepared a docker volume addons-873604
	I0417 19:09:44.764710  694161 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 19:09:44.764729  694161 kic.go:194] Starting extracting preloaded images to volume ...
	I0417 19:09:44.764823  694161 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18665-688109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-873604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e -I lz4 -xf /preloaded.tar -C /extractDir
	I0417 19:09:49.057467  694161 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18665-688109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-873604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e -I lz4 -xf /preloaded.tar -C /extractDir: (4.292594327s)
	I0417 19:09:49.057509  694161 kic.go:203] duration metric: took 4.292776197s to extract preloaded images to volume ...
	W0417 19:09:49.057649  694161 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0417 19:09:49.057758  694161 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0417 19:09:49.112878  694161 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-873604 --name addons-873604 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-873604 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-873604 --network addons-873604 --ip 192.168.49.2 --volume addons-873604:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e
	I0417 19:09:49.434684  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Running}}
	I0417 19:09:49.457390  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:09:49.476716  694161 cli_runner.go:164] Run: docker exec addons-873604 stat /var/lib/dpkg/alternatives/iptables
	I0417 19:09:49.558792  694161 oci.go:144] the created container "addons-873604" has a running status.
	I0417 19:09:49.558823  694161 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa...
	I0417 19:09:50.362407  694161 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0417 19:09:50.380907  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:09:50.397804  694161 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0417 19:09:50.397829  694161 kic_runner.go:114] Args: [docker exec --privileged addons-873604 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0417 19:09:50.451523  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:09:50.480714  694161 machine.go:94] provisionDockerMachine start ...
	I0417 19:09:50.480883  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:09:50.498931  694161 main.go:141] libmachine: Using SSH client type: native
	I0417 19:09:50.499208  694161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33542 <nil> <nil>}
	I0417 19:09:50.499217  694161 main.go:141] libmachine: About to run SSH command:
	hostname
	I0417 19:09:50.647969  694161 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-873604
	
	I0417 19:09:50.647996  694161 ubuntu.go:169] provisioning hostname "addons-873604"
	I0417 19:09:50.648060  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:09:50.664784  694161 main.go:141] libmachine: Using SSH client type: native
	I0417 19:09:50.665033  694161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33542 <nil> <nil>}
	I0417 19:09:50.665050  694161 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-873604 && echo "addons-873604" | sudo tee /etc/hostname
	I0417 19:09:50.818507  694161 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-873604
	
	I0417 19:09:50.818587  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:09:50.835045  694161 main.go:141] libmachine: Using SSH client type: native
	I0417 19:09:50.835294  694161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33542 <nil> <nil>}
	I0417 19:09:50.835315  694161 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-873604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-873604/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-873604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0417 19:09:50.972315  694161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0417 19:09:50.972344  694161 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18665-688109/.minikube CaCertPath:/home/jenkins/minikube-integration/18665-688109/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18665-688109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18665-688109/.minikube}
	I0417 19:09:50.972372  694161 ubuntu.go:177] setting up certificates
	I0417 19:09:50.972409  694161 provision.go:84] configureAuth start
	I0417 19:09:50.972474  694161 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-873604
	I0417 19:09:50.988661  694161 provision.go:143] copyHostCerts
	I0417 19:09:50.988756  694161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-688109/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18665-688109/.minikube/ca.pem (1078 bytes)
	I0417 19:09:50.988887  694161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-688109/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18665-688109/.minikube/cert.pem (1123 bytes)
	I0417 19:09:50.988959  694161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-688109/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18665-688109/.minikube/key.pem (1675 bytes)
	I0417 19:09:50.989028  694161 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18665-688109/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18665-688109/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18665-688109/.minikube/certs/ca-key.pem org=jenkins.addons-873604 san=[127.0.0.1 192.168.49.2 addons-873604 localhost minikube]
	I0417 19:09:51.400106  694161 provision.go:177] copyRemoteCerts
	I0417 19:09:51.400172  694161 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0417 19:09:51.400211  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:09:51.415117  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:09:51.513429  694161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-688109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0417 19:09:51.537796  694161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-688109/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0417 19:09:51.562484  694161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-688109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0417 19:09:51.586537  694161 provision.go:87] duration metric: took 614.110058ms to configureAuth
	I0417 19:09:51.586563  694161 ubuntu.go:193] setting minikube options for container-runtime
	I0417 19:09:51.586759  694161 config.go:182] Loaded profile config "addons-873604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:09:51.586863  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:09:51.602459  694161 main.go:141] libmachine: Using SSH client type: native
	I0417 19:09:51.602721  694161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33542 <nil> <nil>}
	I0417 19:09:51.602735  694161 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0417 19:09:51.844026  694161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0417 19:09:51.844098  694161 machine.go:97] duration metric: took 1.363301107s to provisionDockerMachine
	I0417 19:09:51.844122  694161 client.go:171] duration metric: took 9.99746643s to LocalClient.Create
	I0417 19:09:51.844148  694161 start.go:167] duration metric: took 9.997537616s to libmachine.API.Create "addons-873604"
	I0417 19:09:51.844189  694161 start.go:293] postStartSetup for "addons-873604" (driver="docker")
	I0417 19:09:51.844215  694161 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0417 19:09:51.844304  694161 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0417 19:09:51.844438  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:09:51.861022  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:09:51.961927  694161 ssh_runner.go:195] Run: cat /etc/os-release
	I0417 19:09:51.965211  694161 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0417 19:09:51.965246  694161 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0417 19:09:51.965257  694161 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0417 19:09:51.965264  694161 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0417 19:09:51.965275  694161 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-688109/.minikube/addons for local assets ...
	I0417 19:09:51.965347  694161 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-688109/.minikube/files for local assets ...
	I0417 19:09:51.965379  694161 start.go:296] duration metric: took 121.171533ms for postStartSetup
	I0417 19:09:51.965717  694161 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-873604
	I0417 19:09:51.980683  694161 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/config.json ...
	I0417 19:09:51.980975  694161 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 19:09:51.981028  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:09:51.995950  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:09:52.089217  694161 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0417 19:09:52.093637  694161 start.go:128] duration metric: took 10.250495111s to createHost
	I0417 19:09:52.093661  694161 start.go:83] releasing machines lock for "addons-873604", held for 10.25063269s
	I0417 19:09:52.093730  694161 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-873604
	I0417 19:09:52.108788  694161 ssh_runner.go:195] Run: cat /version.json
	I0417 19:09:52.108849  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:09:52.108866  694161 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0417 19:09:52.108921  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:09:52.128282  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:09:52.138493  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:09:52.223893  694161 ssh_runner.go:195] Run: systemctl --version
	I0417 19:09:52.336035  694161 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0417 19:09:52.477393  694161 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0417 19:09:52.481753  694161 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0417 19:09:52.502207  694161 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0417 19:09:52.502369  694161 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0417 19:09:52.540312  694161 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0417 19:09:52.540333  694161 start.go:494] detecting cgroup driver to use...
	I0417 19:09:52.540365  694161 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0417 19:09:52.540441  694161 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0417 19:09:52.558312  694161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0417 19:09:52.570537  694161 docker.go:217] disabling cri-docker service (if available) ...
	I0417 19:09:52.570599  694161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0417 19:09:52.584893  694161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0417 19:09:52.604347  694161 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0417 19:09:52.689853  694161 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0417 19:09:52.793560  694161 docker.go:233] disabling docker service ...
	I0417 19:09:52.793644  694161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0417 19:09:52.813607  694161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0417 19:09:52.825407  694161 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0417 19:09:52.916033  694161 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0417 19:09:53.008303  694161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0417 19:09:53.022307  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0417 19:09:53.039760  694161 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0417 19:09:53.039838  694161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:09:53.050324  694161 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0417 19:09:53.050394  694161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:09:53.060112  694161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:09:53.070226  694161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:09:53.080331  694161 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0417 19:09:53.089381  694161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:09:53.099115  694161 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:09:53.114224  694161 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:09:53.123669  694161 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0417 19:09:53.132262  694161 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0417 19:09:53.140587  694161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:09:53.227395  694161 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0417 19:09:53.334187  694161 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0417 19:09:53.334307  694161 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0417 19:09:53.337831  694161 start.go:562] Will wait 60s for crictl version
	I0417 19:09:53.337940  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:09:53.341590  694161 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0417 19:09:53.380802  694161 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0417 19:09:53.381018  694161 ssh_runner.go:195] Run: crio --version
	I0417 19:09:53.424797  694161 ssh_runner.go:195] Run: crio --version
	I0417 19:09:53.471021  694161 out.go:177] * Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.24.6 ...
	I0417 19:09:53.472703  694161 cli_runner.go:164] Run: docker network inspect addons-873604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0417 19:09:53.486410  694161 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0417 19:09:53.490022  694161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0417 19:09:53.501028  694161 kubeadm.go:877] updating cluster {Name:addons-873604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:addons-873604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0417 19:09:53.501156  694161 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 19:09:53.501237  694161 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 19:09:53.591376  694161 crio.go:514] all images are preloaded for cri-o runtime.
	I0417 19:09:53.591398  694161 crio.go:433] Images already preloaded, skipping extraction
	I0417 19:09:53.591458  694161 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 19:09:53.630564  694161 crio.go:514] all images are preloaded for cri-o runtime.
	I0417 19:09:53.630588  694161 cache_images.go:84] Images are preloaded, skipping loading
	I0417 19:09:53.630597  694161 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.0-rc.2 crio true true} ...
	I0417 19:09:53.630693  694161 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-873604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:addons-873604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0417 19:09:53.630775  694161 ssh_runner.go:195] Run: crio config
	I0417 19:09:53.678502  694161 cni.go:84] Creating CNI manager for ""
	I0417 19:09:53.678527  694161 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0417 19:09:53.678542  694161 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0417 19:09:53.678566  694161 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-873604 NodeName:addons-873604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0417 19:09:53.678725  694161 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-873604"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0417 19:09:53.678796  694161 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0417 19:09:53.687479  694161 binaries.go:44] Found k8s binaries, skipping transfer
	I0417 19:09:53.687546  694161 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0417 19:09:53.696224  694161 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0417 19:09:53.714264  694161 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0417 19:09:53.732898  694161 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0417 19:09:53.751430  694161 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0417 19:09:53.754843  694161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0417 19:09:53.765912  694161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:09:53.848635  694161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 19:09:53.862733  694161 certs.go:68] Setting up /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604 for IP: 192.168.49.2
	I0417 19:09:53.862797  694161 certs.go:194] generating shared ca certs ...
	I0417 19:09:53.862828  694161 certs.go:226] acquiring lock for ca certs: {Name:mk1d5cdf338d4da229e545e5e63248dcc873d21d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:09:53.862980  694161 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18665-688109/.minikube/ca.key
	I0417 19:09:54.045381  694161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-688109/.minikube/ca.crt ...
	I0417 19:09:54.045416  694161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-688109/.minikube/ca.crt: {Name:mk93cd65d0c6dce70744e607a147811e84a5870d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:09:54.046229  694161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-688109/.minikube/ca.key ...
	I0417 19:09:54.046249  694161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-688109/.minikube/ca.key: {Name:mkd68f826a3f0fe60b7fe39e9894fdc502e1006d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:09:54.046353  694161 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18665-688109/.minikube/proxy-client-ca.key
	I0417 19:09:54.619411  694161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-688109/.minikube/proxy-client-ca.crt ...
	I0417 19:09:54.619450  694161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-688109/.minikube/proxy-client-ca.crt: {Name:mkcf59b20b0c3249f1bca795a6e74d934bed98f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:09:54.619667  694161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-688109/.minikube/proxy-client-ca.key ...
	I0417 19:09:54.619683  694161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-688109/.minikube/proxy-client-ca.key: {Name:mk686c4b044fb5ee0d53aa4e8e625235d31d933f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:09:54.620394  694161 certs.go:256] generating profile certs ...
	I0417 19:09:54.620465  694161 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.key
	I0417 19:09:54.620485  694161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt with IP's: []
	I0417 19:09:54.825180  694161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt ...
	I0417 19:09:54.825209  694161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: {Name:mk61512f97c3a1aaa9ce05997d4f70e7a008ab1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:09:54.825404  694161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.key ...
	I0417 19:09:54.825420  694161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.key: {Name:mk42d4d44866af94905680123bce0f356f164dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:09:54.826053  694161 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/apiserver.key.5856b0aa
	I0417 19:09:54.826079  694161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/apiserver.crt.5856b0aa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0417 19:09:55.183901  694161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/apiserver.crt.5856b0aa ...
	I0417 19:09:55.183931  694161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/apiserver.crt.5856b0aa: {Name:mkd86a7b83a2852731ea06780d6308bc7c3bfafc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:09:55.184131  694161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/apiserver.key.5856b0aa ...
	I0417 19:09:55.184146  694161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/apiserver.key.5856b0aa: {Name:mk8731f1eae302d33cee016b798929c1117d2483 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:09:55.184230  694161 certs.go:381] copying /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/apiserver.crt.5856b0aa -> /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/apiserver.crt
	I0417 19:09:55.184316  694161 certs.go:385] copying /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/apiserver.key.5856b0aa -> /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/apiserver.key
	I0417 19:09:55.184370  694161 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/proxy-client.key
	I0417 19:09:55.184408  694161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/proxy-client.crt with IP's: []
	I0417 19:09:55.350368  694161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/proxy-client.crt ...
	I0417 19:09:55.350399  694161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/proxy-client.crt: {Name:mk73336e1835358968b7d66605ff7f94d6435bec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:09:55.350590  694161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/proxy-client.key ...
	I0417 19:09:55.350605  694161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/proxy-client.key: {Name:mkf6cf392164f5a780c94e94a48e3e83db8574be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:09:55.350793  694161 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-688109/.minikube/certs/ca-key.pem (1679 bytes)
	I0417 19:09:55.350838  694161 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-688109/.minikube/certs/ca.pem (1078 bytes)
	I0417 19:09:55.350878  694161 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-688109/.minikube/certs/cert.pem (1123 bytes)
	I0417 19:09:55.350907  694161 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-688109/.minikube/certs/key.pem (1675 bytes)
	I0417 19:09:55.351499  694161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-688109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0417 19:09:55.378424  694161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-688109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0417 19:09:55.404046  694161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-688109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0417 19:09:55.428519  694161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-688109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0417 19:09:55.453260  694161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0417 19:09:55.477437  694161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0417 19:09:55.501871  694161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0417 19:09:55.526404  694161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0417 19:09:55.551335  694161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-688109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0417 19:09:55.576785  694161 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0417 19:09:55.595384  694161 ssh_runner.go:195] Run: openssl version
	I0417 19:09:55.600830  694161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0417 19:09:55.610401  694161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:09:55.613931  694161 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 17 19:09 /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:09:55.613998  694161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:09:55.620906  694161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0417 19:09:55.630445  694161 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0417 19:09:55.634051  694161 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0417 19:09:55.634099  694161 kubeadm.go:391] StartCluster: {Name:addons-873604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:addons-873604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:09:55.634226  694161 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0417 19:09:55.634322  694161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0417 19:09:55.673154  694161 cri.go:89] found id: ""
	I0417 19:09:55.673273  694161 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0417 19:09:55.682155  694161 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0417 19:09:55.691129  694161 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0417 19:09:55.691226  694161 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0417 19:09:55.700053  694161 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0417 19:09:55.700075  694161 kubeadm.go:156] found existing configuration files:
	
	I0417 19:09:55.700145  694161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0417 19:09:55.709260  694161 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0417 19:09:55.709353  694161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0417 19:09:55.717896  694161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0417 19:09:55.726678  694161 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0417 19:09:55.726745  694161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0417 19:09:55.735788  694161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0417 19:09:55.745158  694161 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0417 19:09:55.745226  694161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0417 19:09:55.753854  694161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0417 19:09:55.762767  694161 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0417 19:09:55.762878  694161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0417 19:09:55.771438  694161 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0417 19:09:55.818215  694161 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.2
	I0417 19:09:55.818465  694161 kubeadm.go:309] [preflight] Running pre-flight checks
	I0417 19:09:55.859328  694161 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0417 19:09:55.859400  694161 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1057-aws
	I0417 19:09:55.859439  694161 kubeadm.go:309] OS: Linux
	I0417 19:09:55.859494  694161 kubeadm.go:309] CGROUPS_CPU: enabled
	I0417 19:09:55.859544  694161 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0417 19:09:55.859593  694161 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0417 19:09:55.859642  694161 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0417 19:09:55.859690  694161 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0417 19:09:55.859739  694161 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0417 19:09:55.859785  694161 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0417 19:09:55.859834  694161 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0417 19:09:55.859881  694161 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0417 19:09:55.930155  694161 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0417 19:09:55.930267  694161 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0417 19:09:55.930360  694161 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0417 19:09:56.191102  694161 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0417 19:09:56.193987  694161 out.go:204]   - Generating certificates and keys ...
	I0417 19:09:56.194095  694161 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0417 19:09:56.194174  694161 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0417 19:09:56.624005  694161 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0417 19:09:57.316330  694161 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0417 19:09:57.776706  694161 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0417 19:09:58.506835  694161 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0417 19:09:58.796796  694161 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0417 19:09:58.797127  694161 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-873604 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0417 19:09:59.088963  694161 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0417 19:09:59.089258  694161 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-873604 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0417 19:09:59.485242  694161 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0417 19:09:59.889216  694161 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0417 19:10:00.470845  694161 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0417 19:10:00.485669  694161 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0417 19:10:01.378507  694161 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0417 19:10:01.982884  694161 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0417 19:10:02.268403  694161 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0417 19:10:02.722430  694161 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0417 19:10:03.104831  694161 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0417 19:10:03.105777  694161 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0417 19:10:03.109186  694161 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0417 19:10:03.111961  694161 out.go:204]   - Booting up control plane ...
	I0417 19:10:03.112075  694161 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0417 19:10:03.112163  694161 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0417 19:10:03.114553  694161 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0417 19:10:03.126333  694161 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0417 19:10:03.127511  694161 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0417 19:10:03.127567  694161 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0417 19:10:03.228776  694161 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0417 19:10:03.228865  694161 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0417 19:10:04.243034  694161 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.014338995s
	I0417 19:10:04.243120  694161 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0417 19:10:10.247345  694161 kubeadm.go:309] [api-check] The API server is healthy after 6.002158165s
	I0417 19:10:10.264987  694161 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0417 19:10:10.285762  694161 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0417 19:10:10.320943  694161 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0417 19:10:10.321166  694161 kubeadm.go:309] [mark-control-plane] Marking the node addons-873604 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0417 19:10:10.331103  694161 kubeadm.go:309] [bootstrap-token] Using token: f332dj.4fi44gqjkjxhwrp9
	I0417 19:10:10.333195  694161 out.go:204]   - Configuring RBAC rules ...
	I0417 19:10:10.333334  694161 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0417 19:10:10.337964  694161 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0417 19:10:10.347103  694161 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0417 19:10:10.350937  694161 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0417 19:10:10.357386  694161 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0417 19:10:10.361086  694161 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0417 19:10:10.651393  694161 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0417 19:10:11.098674  694161 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0417 19:10:11.652717  694161 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0417 19:10:11.653986  694161 kubeadm.go:309] 
	I0417 19:10:11.654057  694161 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0417 19:10:11.654068  694161 kubeadm.go:309] 
	I0417 19:10:11.654143  694161 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0417 19:10:11.654151  694161 kubeadm.go:309] 
	I0417 19:10:11.654177  694161 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0417 19:10:11.654237  694161 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0417 19:10:11.654293  694161 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0417 19:10:11.654306  694161 kubeadm.go:309] 
	I0417 19:10:11.654357  694161 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0417 19:10:11.654365  694161 kubeadm.go:309] 
	I0417 19:10:11.654411  694161 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0417 19:10:11.654420  694161 kubeadm.go:309] 
	I0417 19:10:11.654470  694161 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0417 19:10:11.654548  694161 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0417 19:10:11.654618  694161 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0417 19:10:11.654626  694161 kubeadm.go:309] 
	I0417 19:10:11.654707  694161 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0417 19:10:11.654784  694161 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0417 19:10:11.654792  694161 kubeadm.go:309] 
	I0417 19:10:11.654873  694161 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token f332dj.4fi44gqjkjxhwrp9 \
	I0417 19:10:11.654975  694161 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64e6df13a2dfd9033b0e1d5e98b3cfd2efe34f46e411a8fa9e48d2f90687e6a8 \
	I0417 19:10:11.655000  694161 kubeadm.go:309] 	--control-plane 
	I0417 19:10:11.655005  694161 kubeadm.go:309] 
	I0417 19:10:11.655090  694161 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0417 19:10:11.655098  694161 kubeadm.go:309] 
	I0417 19:10:11.655177  694161 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token f332dj.4fi44gqjkjxhwrp9 \
	I0417 19:10:11.655280  694161 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:64e6df13a2dfd9033b0e1d5e98b3cfd2efe34f46e411a8fa9e48d2f90687e6a8 
	I0417 19:10:11.658815  694161 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1057-aws\n", err: exit status 1
	I0417 19:10:11.658933  694161 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0417 19:10:11.658953  694161 cni.go:84] Creating CNI manager for ""
	I0417 19:10:11.658961  694161 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0417 19:10:11.661539  694161 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0417 19:10:11.663688  694161 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0417 19:10:11.667622  694161 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl ...
	I0417 19:10:11.667647  694161 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0417 19:10:11.687434  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0417 19:10:11.983629  694161 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0417 19:10:11.983692  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:11.983795  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-873604 minikube.k8s.io/updated_at=2024_04_17T19_10_11_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3 minikube.k8s.io/name=addons-873604 minikube.k8s.io/primary=true
	I0417 19:10:12.162299  694161 ops.go:34] apiserver oom_adj: -16
	I0417 19:10:12.162392  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:12.662843  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:13.162486  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:13.663177  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:14.162571  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:14.662489  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:15.163090  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:15.663278  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:16.162484  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:16.662935  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:17.163308  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:17.662634  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:18.163007  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:18.662559  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:19.163438  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:19.662558  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:20.162530  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:20.662873  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:21.163519  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:21.663100  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:22.162980  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:22.662551  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:23.162549  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:23.662657  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:24.162597  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:24.663394  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:25.162860  694161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 19:10:25.268925  694161 kubeadm.go:1107] duration metric: took 13.285299404s to wait for elevateKubeSystemPrivileges
	W0417 19:10:25.268959  694161 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0417 19:10:25.268967  694161 kubeadm.go:393] duration metric: took 29.634872006s to StartCluster
	I0417 19:10:25.268983  694161 settings.go:142] acquiring lock: {Name:mkca3c46bd90bd66268d8c5f3823c8842153ebd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:10:25.269101  694161 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18665-688109/kubeconfig
	I0417 19:10:25.269573  694161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-688109/kubeconfig: {Name:mk9d670643a338e225544addd9a80feeadd71982 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:10:25.270578  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0417 19:10:25.270618  694161 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 19:10:25.273454  694161 out.go:177] * Verifying Kubernetes components...
	I0417 19:10:25.270849  694161 config.go:182] Loaded profile config "addons-873604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:10:25.270859  694161 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0417 19:10:25.275367  694161 addons.go:69] Setting yakd=true in profile "addons-873604"
	I0417 19:10:25.275381  694161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:10:25.275397  694161 addons.go:234] Setting addon yakd=true in "addons-873604"
	I0417 19:10:25.275428  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.275471  694161 addons.go:69] Setting ingress-dns=true in profile "addons-873604"
	I0417 19:10:25.275493  694161 addons.go:234] Setting addon ingress-dns=true in "addons-873604"
	I0417 19:10:25.275523  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.275910  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.275916  694161 addons.go:69] Setting inspektor-gadget=true in profile "addons-873604"
	I0417 19:10:25.275934  694161 addons.go:234] Setting addon inspektor-gadget=true in "addons-873604"
	I0417 19:10:25.275951  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.276282  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.277235  694161 addons.go:69] Setting metrics-server=true in profile "addons-873604"
	I0417 19:10:25.277286  694161 addons.go:69] Setting cloud-spanner=true in profile "addons-873604"
	I0417 19:10:25.277305  694161 addons.go:234] Setting addon metrics-server=true in "addons-873604"
	I0417 19:10:25.277310  694161 addons.go:234] Setting addon cloud-spanner=true in "addons-873604"
	I0417 19:10:25.277337  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.277342  694161 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-873604"
	I0417 19:10:25.277376  694161 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-873604"
	I0417 19:10:25.277391  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.277751  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.277337  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.278084  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.277751  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.286027  694161 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-873604"
	I0417 19:10:25.286067  694161 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-873604"
	I0417 19:10:25.286111  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.286535  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.288671  694161 addons.go:69] Setting default-storageclass=true in profile "addons-873604"
	I0417 19:10:25.288714  694161 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-873604"
	I0417 19:10:25.289006  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.289447  694161 addons.go:69] Setting registry=true in profile "addons-873604"
	I0417 19:10:25.289482  694161 addons.go:234] Setting addon registry=true in "addons-873604"
	I0417 19:10:25.289521  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.289904  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.315876  694161 addons.go:69] Setting storage-provisioner=true in profile "addons-873604"
	I0417 19:10:25.315927  694161 addons.go:234] Setting addon storage-provisioner=true in "addons-873604"
	I0417 19:10:25.315965  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.316459  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.328653  694161 addons.go:69] Setting gcp-auth=true in profile "addons-873604"
	I0417 19:10:25.328704  694161 mustload.go:65] Loading cluster: addons-873604
	I0417 19:10:25.328872  694161 config.go:182] Loaded profile config "addons-873604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:10:25.329111  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.344718  694161 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-873604"
	I0417 19:10:25.344768  694161 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-873604"
	I0417 19:10:25.345062  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.356615  694161 addons.go:69] Setting ingress=true in profile "addons-873604"
	I0417 19:10:25.356657  694161 addons.go:234] Setting addon ingress=true in "addons-873604"
	I0417 19:10:25.356706  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.357122  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.358272  694161 addons.go:69] Setting volumesnapshots=true in profile "addons-873604"
	I0417 19:10:25.358309  694161 addons.go:234] Setting addon volumesnapshots=true in "addons-873604"
	I0417 19:10:25.358346  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.358761  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.275911  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.419473  694161 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0417 19:10:25.434724  694161 out.go:177]   - Using image docker.io/registry:2.8.3
	I0417 19:10:25.438643  694161 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0417 19:10:25.448318  694161 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0417 19:10:25.448352  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0417 19:10:25.448454  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.447769  694161 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0417 19:10:25.465667  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0417 19:10:25.465767  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.447780  694161 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0417 19:10:25.447785  694161 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0417 19:10:25.447789  694161 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0417 19:10:25.447793  694161 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0417 19:10:25.495766  694161 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0417 19:10:25.496013  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.497486  694161 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-873604"
	I0417 19:10:25.502462  694161 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0417 19:10:25.502521  694161 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0417 19:10:25.502552  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0417 19:10:25.506059  694161 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0417 19:10:25.507009  694161 addons.go:234] Setting addon default-storageclass=true in "addons-873604"
	I0417 19:10:25.507244  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.507761  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.516591  694161 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0417 19:10:25.512264  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:25.512313  694161 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0417 19:10:25.514168  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0417 19:10:25.514240  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.514247  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0417 19:10:25.521625  694161 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0417 19:10:25.522054  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:25.524967  694161 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0417 19:10:25.525182  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.530440  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.530535  694161 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0417 19:10:25.534095  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0417 19:10:25.544617  694161 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0417 19:10:25.554427  694161 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0417 19:10:25.554687  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0417 19:10:25.556878  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.560046  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0417 19:10:25.564658  694161 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0417 19:10:25.564746  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.573780  694161 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0417 19:10:25.573861  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.593261  694161 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0417 19:10:25.593287  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0417 19:10:25.593350  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.616636  694161 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0417 19:10:25.615536  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:25.615757  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:25.616995  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0417 19:10:25.626002  694161 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0417 19:10:25.627945  694161 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0417 19:10:25.629714  694161 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0417 19:10:25.628421  694161 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0417 19:10:25.634062  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0417 19:10:25.634192  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.647069  694161 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0417 19:10:25.649681  694161 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0417 19:10:25.655101  694161 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0417 19:10:25.657659  694161 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0417 19:10:25.657736  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0417 19:10:25.657834  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.686001  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:25.690121  694161 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0417 19:10:25.690148  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0417 19:10:25.690210  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.717001  694161 out.go:177]   - Using image docker.io/busybox:stable
	I0417 19:10:25.723299  694161 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0417 19:10:25.725491  694161 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0417 19:10:25.725516  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0417 19:10:25.725707  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:25.740591  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:25.772336  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:25.776824  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:25.782712  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:25.806812  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:25.809023  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:25.809865  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:25.828688  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:25.833988  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:25.841930  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:26.074562  694161 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0417 19:10:26.074589  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0417 19:10:26.076512  694161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 19:10:26.185738  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0417 19:10:26.197877  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0417 19:10:26.205301  694161 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0417 19:10:26.205325  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0417 19:10:26.247789  694161 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0417 19:10:26.247864  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0417 19:10:26.293716  694161 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0417 19:10:26.293781  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0417 19:10:26.318344  694161 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0417 19:10:26.318412  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0417 19:10:26.324349  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0417 19:10:26.347358  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0417 19:10:26.347916  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0417 19:10:26.352561  694161 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0417 19:10:26.352633  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0417 19:10:26.395127  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0417 19:10:26.409508  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0417 19:10:26.431620  694161 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0417 19:10:26.431687  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0417 19:10:26.435088  694161 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0417 19:10:26.435157  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0417 19:10:26.440535  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0417 19:10:26.494929  694161 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0417 19:10:26.495004  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0417 19:10:26.504252  694161 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0417 19:10:26.504321  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0417 19:10:26.518486  694161 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0417 19:10:26.518570  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0417 19:10:26.613502  694161 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0417 19:10:26.613575  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0417 19:10:26.616909  694161 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0417 19:10:26.616987  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0417 19:10:26.700260  694161 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0417 19:10:26.700515  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0417 19:10:26.700494  694161 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0417 19:10:26.700607  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0417 19:10:26.718739  694161 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0417 19:10:26.718808  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0417 19:10:26.753135  694161 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0417 19:10:26.753201  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0417 19:10:26.762415  694161 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0417 19:10:26.762481  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0417 19:10:26.855594  694161 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0417 19:10:26.855659  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0417 19:10:26.888738  694161 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0417 19:10:26.888827  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0417 19:10:26.889018  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0417 19:10:26.926696  694161 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0417 19:10:26.926782  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0417 19:10:26.945001  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0417 19:10:26.958707  694161 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0417 19:10:26.958780  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0417 19:10:26.985547  694161 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0417 19:10:26.985620  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0417 19:10:27.030655  694161 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0417 19:10:27.030733  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0417 19:10:27.033941  694161 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0417 19:10:27.034020  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0417 19:10:27.074058  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0417 19:10:27.145612  694161 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0417 19:10:27.145677  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0417 19:10:27.146006  694161 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0417 19:10:27.146046  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0417 19:10:27.204337  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0417 19:10:27.251630  694161 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0417 19:10:27.251696  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0417 19:10:27.414564  694161 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0417 19:10:27.414644  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0417 19:10:27.544801  694161 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0417 19:10:27.544866  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0417 19:10:27.689791  694161 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0417 19:10:27.689823  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0417 19:10:27.840076  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0417 19:10:28.633453  694161 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.008876091s)
	I0417 19:10:28.633490  694161 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
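The host-record injection above rewrites the coredns ConfigMap in-stream with two sed insertions. The same transformation can be replayed locally without a cluster; this sketch drops the kubectl get/replace plumbing and uses an illustrative Corefile fragment (GNU sed assumed, matching the Linux node the real command ran on):

```shell
#!/usr/bin/env sh
# Replay of the Corefile edit from the log: insert a hosts{} block before the
# forward plugin and a log directive before errors. Sample input is illustrative.
corefile='        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }'

patched=$(printf '%s\n' "$corefile" | sed \
  -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
  -e '/^        errors *$/i \        log')

printf '%s\n' "$patched"
```

The leading `\` after `i` preserves the Corefile's indentation, and GNU sed expands the embedded `\n` escapes into real newlines, which is what lets a single `-e` expression insert a multi-line plugin block.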
	I0417 19:10:28.634501  694161 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.557958799s)
	I0417 19:10:28.635196  694161 node_ready.go:35] waiting up to 6m0s for node "addons-873604" to be "Ready" ...
	I0417 19:10:29.166668  694161 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-873604" context rescaled to 1 replicas
	I0417 19:10:29.353382  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.167610217s)
	I0417 19:10:29.683820  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.485861649s)
	I0417 19:10:30.664201  694161 node_ready.go:53] node "addons-873604" has status "Ready":"False"
	I0417 19:10:31.355545  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.031085839s)
	I0417 19:10:31.356024  694161 addons.go:470] Verifying addon ingress=true in "addons-873604"
	I0417 19:10:31.358110  694161 out.go:177] * Verifying ingress addon...
	I0417 19:10:31.355763  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.008368405s)
	I0417 19:10:31.355813  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.007853621s)
	I0417 19:10:31.355832  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.960683693s)
	I0417 19:10:31.355849  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.946283273s)
	I0417 19:10:31.355888  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.915290989s)
	I0417 19:10:31.355934  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.466882199s)
	I0417 19:10:31.355971  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.410904382s)
	I0417 19:10:31.363011  694161 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-873604 service yakd-dashboard -n yakd-dashboard
	
	I0417 19:10:31.361493  694161 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0417 19:10:31.361662  694161 addons.go:470] Verifying addon registry=true in "addons-873604"
	I0417 19:10:31.361678  694161 addons.go:470] Verifying addon metrics-server=true in "addons-873604"
	I0417 19:10:31.367392  694161 out.go:177] * Verifying registry addon...
	I0417 19:10:31.371128  694161 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W0417 19:10:31.393288  694161 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0417 19:10:31.421529  694161 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0417 19:10:31.421558  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:31.422113  694161 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0417 19:10:31.422159  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:31.456565  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.382402382s)
	W0417 19:10:31.456601  694161 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0417 19:10:31.456624  694161 retry.go:31] will retry after 304.18179ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
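The failure above is a startup race, not a broken manifest: the VolumeSnapshotClass object cannot be mapped until the CRDs created in the same apply batch are registered, so retry.go schedules another attempt after a short delay. A generic sketch of that retry-with-backoff pattern (minikube's real implementation is Go; the shell below is illustrative only):

```shell
#!/usr/bin/env sh
# retry ATTEMPTS DELAY CMD... : run CMD until it succeeds, doubling the wait
# between attempts, giving up after ATTEMPTS tries.
retry() {
  attempts=$1; delay=$2; shift 2
  i=1
  while :; do
    if "$@"; then return 0; fi
    [ "$i" -ge "$attempts" ] && return 1
    sleep "$delay"
    delay=$((delay * 2))   # exponential backoff between attempts
    i=$((i + 1))
  done
}
```

In the log this is exactly what turns the first "no matches for kind VolumeSnapshotClass" failure into the successful `kubectl apply --force` a few seconds later, once the just-created CRDs have been registered by the API server.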
	I0417 19:10:31.456695  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.252245288s)
	I0417 19:10:31.723649  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.883526582s)
	I0417 19:10:31.723750  694161 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-873604"
	I0417 19:10:31.727409  694161 out.go:177] * Verifying csi-hostpath-driver addon...
	I0417 19:10:31.730837  694161 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0417 19:10:31.761886  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0417 19:10:31.769910  694161 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0417 19:10:31.769937  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:31.870305  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:31.884834  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:32.235388  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:32.369411  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:32.382452  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:32.741173  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:32.868971  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:32.877267  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:33.138965  694161 node_ready.go:53] node "addons-873604" has status "Ready":"False"
	I0417 19:10:33.235662  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:33.369880  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:33.376814  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:33.747531  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:33.866287  694161 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0417 19:10:33.866365  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:33.881308  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:33.901922  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:33.911544  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:34.090387  694161 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0417 19:10:34.129054  694161 addons.go:234] Setting addon gcp-auth=true in "addons-873604"
	I0417 19:10:34.129106  694161 host.go:66] Checking if "addons-873604" exists ...
	I0417 19:10:34.129544  694161 cli_runner.go:164] Run: docker container inspect addons-873604 --format={{.State.Status}}
	I0417 19:10:34.146111  694161 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0417 19:10:34.146166  694161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-873604
	I0417 19:10:34.184643  694161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/addons-873604/id_rsa Username:docker}
	I0417 19:10:34.242748  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:34.370505  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:34.375460  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:34.741337  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:34.869880  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:34.875462  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:34.901369  694161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.139432601s)
	I0417 19:10:34.904202  694161 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0417 19:10:34.906842  694161 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0417 19:10:34.909067  694161 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0417 19:10:34.909094  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0417 19:10:34.943154  694161 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0417 19:10:34.943181  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0417 19:10:34.970994  694161 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0417 19:10:34.971028  694161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0417 19:10:34.994175  694161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0417 19:10:35.236043  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:35.369854  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:35.376202  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:35.669273  694161 node_ready.go:53] node "addons-873604" has status "Ready":"False"
	I0417 19:10:35.708575  694161 addons.go:470] Verifying addon gcp-auth=true in "addons-873604"
	I0417 19:10:35.710859  694161 out.go:177] * Verifying gcp-auth addon...
	I0417 19:10:35.714416  694161 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0417 19:10:35.726369  694161 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0417 19:10:35.726402  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:35.737043  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:35.870349  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:35.875860  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:36.219485  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:36.237764  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:36.370223  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:36.378510  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:36.725348  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:36.741277  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:36.870492  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:36.875817  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:37.222461  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:37.235817  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:37.369787  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:37.375672  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:37.717842  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:37.740759  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:37.869888  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:37.876592  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:38.138861  694161 node_ready.go:53] node "addons-873604" has status "Ready":"False"
	I0417 19:10:38.225001  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:38.238824  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:38.371766  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:38.377432  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:38.718765  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:38.744321  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:38.874671  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:38.880120  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:39.241812  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:39.257757  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:39.371224  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:39.376249  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:39.717996  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:39.738323  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:39.870734  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:39.875623  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:40.139051  694161 node_ready.go:53] node "addons-873604" has status "Ready":"False"
	I0417 19:10:40.218201  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:40.246904  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:40.369716  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:40.375729  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:40.717954  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:40.736144  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:40.869500  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:40.875173  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:41.218470  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:41.243127  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:41.370231  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:41.375902  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:41.718717  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:41.738493  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:41.869720  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:41.875470  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:42.144611  694161 node_ready.go:53] node "addons-873604" has status "Ready":"False"
	I0417 19:10:42.219098  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:42.244859  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:42.370993  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:42.376710  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:42.718338  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:42.741738  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:42.869755  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:42.875782  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:43.218545  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:43.235532  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:43.369652  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:43.375983  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:43.718024  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:43.735764  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:43.869481  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:43.875400  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:44.218908  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:44.235079  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:44.369604  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:44.375463  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:44.639044  694161 node_ready.go:53] node "addons-873604" has status "Ready":"False"
	I0417 19:10:44.717939  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:44.735659  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:44.869819  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:44.875686  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:45.219433  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:45.239557  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:45.371014  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:45.375903  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:45.718534  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:45.736041  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:45.869520  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:45.875728  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:46.217824  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:46.236030  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:46.369776  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:46.375147  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:46.639153  694161 node_ready.go:53] node "addons-873604" has status "Ready":"False"
	I0417 19:10:46.718101  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:46.736301  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:46.870011  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:46.874957  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:47.218218  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:47.236112  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:47.369860  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:47.375589  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:47.718185  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:47.735878  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:47.869210  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:47.875185  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:48.218659  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:48.235815  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:48.370102  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:48.375387  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:48.717887  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:48.740697  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:48.869112  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:48.874946  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:49.138984  694161 node_ready.go:53] node "addons-873604" has status "Ready":"False"
	I0417 19:10:49.218420  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:49.237041  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:49.369614  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:49.375446  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:49.718027  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:49.735523  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:49.869621  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:49.875871  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:50.218195  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:50.235394  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:50.368854  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:50.375640  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:50.718051  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:50.742698  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:50.869911  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:50.874802  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:51.141569  694161 node_ready.go:53] node "addons-873604" has status "Ready":"False"
	I0417 19:10:51.217814  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:51.235260  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:51.369930  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:51.376075  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:51.718403  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:51.735753  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:51.869875  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:51.875881  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:52.218587  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:52.235946  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:52.370301  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:52.374995  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:52.717822  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:52.735306  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:52.869458  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:52.875524  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:53.218284  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:53.234965  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:53.369560  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:53.375331  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:53.638660  694161 node_ready.go:53] node "addons-873604" has status "Ready":"False"
	I0417 19:10:53.718059  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:53.736916  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:53.869022  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:53.874600  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:54.218310  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:54.236412  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:54.369596  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:54.379677  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:54.718184  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:54.736434  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:54.869969  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:54.874908  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:55.218201  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:55.237492  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:55.371780  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:55.375377  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:55.639952  694161 node_ready.go:53] node "addons-873604" has status "Ready":"False"
	I0417 19:10:55.718402  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:55.735774  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:55.870137  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:55.876303  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:56.218478  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:56.236205  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:56.369275  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:56.375044  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:56.718322  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:56.736029  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:56.869783  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:56.875502  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:57.218460  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:57.235190  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:57.369537  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:57.375036  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:57.717728  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:57.741519  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:57.869492  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:57.875018  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:58.193982  694161 node_ready.go:49] node "addons-873604" has status "Ready":"True"
	I0417 19:10:58.194009  694161 node_ready.go:38] duration metric: took 29.558788318s for node "addons-873604" to be "Ready" ...
	I0417 19:10:58.194020  694161 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0417 19:10:58.219989  694161 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-tf89r" in "kube-system" namespace to be "Ready" ...
	I0417 19:10:58.230190  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:58.242523  694161 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0417 19:10:58.242551  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:58.453219  694161 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0417 19:10:58.453244  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:58.459654  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:58.720215  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:58.749642  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:58.905939  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:58.910002  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:59.218682  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:59.241589  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:59.371419  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:59.375812  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:10:59.719430  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:10:59.750563  694161 pod_ready.go:92] pod "coredns-7db6d8ff4d-tf89r" in "kube-system" namespace has status "Ready":"True"
	I0417 19:10:59.750638  694161 pod_ready.go:81] duration metric: took 1.530606485s for pod "coredns-7db6d8ff4d-tf89r" in "kube-system" namespace to be "Ready" ...
	I0417 19:10:59.750676  694161 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-873604" in "kube-system" namespace to be "Ready" ...
	I0417 19:10:59.760738  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:10:59.772853  694161 pod_ready.go:92] pod "etcd-addons-873604" in "kube-system" namespace has status "Ready":"True"
	I0417 19:10:59.772939  694161 pod_ready.go:81] duration metric: took 22.232651ms for pod "etcd-addons-873604" in "kube-system" namespace to be "Ready" ...
	I0417 19:10:59.772982  694161 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-873604" in "kube-system" namespace to be "Ready" ...
	I0417 19:10:59.781220  694161 pod_ready.go:92] pod "kube-apiserver-addons-873604" in "kube-system" namespace has status "Ready":"True"
	I0417 19:10:59.781286  694161 pod_ready.go:81] duration metric: took 8.281225ms for pod "kube-apiserver-addons-873604" in "kube-system" namespace to be "Ready" ...
	I0417 19:10:59.781314  694161 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-873604" in "kube-system" namespace to be "Ready" ...
	I0417 19:10:59.790065  694161 pod_ready.go:92] pod "kube-controller-manager-addons-873604" in "kube-system" namespace has status "Ready":"True"
	I0417 19:10:59.790136  694161 pod_ready.go:81] duration metric: took 8.801893ms for pod "kube-controller-manager-addons-873604" in "kube-system" namespace to be "Ready" ...
	I0417 19:10:59.790167  694161 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zcxl8" in "kube-system" namespace to be "Ready" ...
	I0417 19:10:59.798101  694161 pod_ready.go:92] pod "kube-proxy-zcxl8" in "kube-system" namespace has status "Ready":"True"
	I0417 19:10:59.798168  694161 pod_ready.go:81] duration metric: took 7.981361ms for pod "kube-proxy-zcxl8" in "kube-system" namespace to be "Ready" ...
	I0417 19:10:59.798195  694161 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-873604" in "kube-system" namespace to be "Ready" ...
	I0417 19:10:59.870139  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:10:59.878728  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:00.142401  694161 pod_ready.go:92] pod "kube-scheduler-addons-873604" in "kube-system" namespace has status "Ready":"True"
	I0417 19:11:00.142491  694161 pod_ready.go:81] duration metric: took 344.272717ms for pod "kube-scheduler-addons-873604" in "kube-system" namespace to be "Ready" ...
	I0417 19:11:00.142521  694161 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace to be "Ready" ...
	I0417 19:11:00.227796  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:00.238580  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:00.372007  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:00.380066  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:00.722375  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:00.753135  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:00.869386  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:00.876005  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:01.218295  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:01.237169  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:01.370068  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:01.375354  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:01.718041  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:01.737786  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:01.891834  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:01.903192  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:02.148836  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:02.218827  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:02.236968  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:02.371716  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:02.378179  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:02.719972  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:02.736836  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:02.869447  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:02.875945  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:03.218552  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:03.237415  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:03.371858  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:03.377871  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:03.722136  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:03.737853  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:03.872749  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:03.880712  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:04.150468  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:04.218259  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:04.238582  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:04.370947  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:04.376649  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:04.718729  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:04.737561  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:04.869349  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:04.876110  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:05.220261  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:05.236778  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:05.378553  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:05.378768  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:05.718695  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:05.740079  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:05.870137  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:05.875792  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:06.150773  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:06.218462  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:06.238256  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:06.369807  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:06.377426  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:06.718603  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:06.741723  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:06.871607  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:06.879159  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:07.222323  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:07.240218  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:07.376998  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:07.380954  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:07.718172  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:07.738377  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:07.870599  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:07.875837  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:08.218036  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:08.236735  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:08.382356  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:08.389373  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:08.650078  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:08.718935  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:08.739064  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:08.871648  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:08.910720  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:09.218824  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:09.237055  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:09.371332  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:09.377805  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:09.718698  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:09.742366  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:09.869953  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:09.877683  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:10.224551  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:10.249433  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:10.372135  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:10.380415  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:10.719590  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:10.742216  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:10.870793  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:10.877309  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:11.149651  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:11.219175  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:11.238836  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:11.371144  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:11.376428  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:11.720002  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:11.740368  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:11.872181  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:11.888635  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:12.219814  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:12.241282  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:12.370200  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:12.376407  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:12.718721  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:12.740118  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:12.871105  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:12.876231  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:13.218848  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:13.236893  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:13.369915  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:13.376112  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 19:11:13.653763  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:13.718924  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:13.737517  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:13.886278  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:13.895271  694161 kapi.go:107] duration metric: took 42.524139256s to wait for kubernetes.io/minikube-addons=registry ...
	I0417 19:11:14.223223  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:14.238887  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:14.371817  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:14.718137  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:14.740228  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:14.870261  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:15.219817  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:15.238023  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:15.371453  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:15.720488  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:15.739108  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:15.872775  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:16.152651  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:16.235876  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:16.254732  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:16.374869  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:16.728724  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:16.771056  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:16.888550  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:17.226920  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:17.239956  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:17.370111  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:17.718273  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:17.748220  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:17.872756  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:18.153206  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:18.220492  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:18.239230  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:18.372003  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:18.721435  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:18.760309  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:18.871903  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:19.218808  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:19.238035  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:19.371667  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:19.718281  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:19.743463  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:19.870338  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:20.219031  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:20.237181  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:20.370688  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:20.654549  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:20.719437  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:20.751637  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:20.870879  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:21.220109  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:21.239047  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:21.370636  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:21.718946  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:21.741844  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:21.870909  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:22.242466  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:22.249024  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:22.370372  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:22.722411  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:22.753462  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:22.869380  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:23.160041  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:23.218705  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:23.237200  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:23.369904  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:23.718518  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:23.737766  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:23.869369  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:24.220204  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:24.238505  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:24.373666  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:24.719587  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:24.737987  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:24.869967  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:25.219125  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:25.239295  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:25.370188  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:25.649597  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:25.717825  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:25.740871  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:25.870722  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:26.218908  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:26.237074  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:26.369559  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:26.718015  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:26.742363  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:26.869507  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:27.219220  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:27.236965  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:27.371839  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:27.653655  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:27.718434  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:27.741889  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:27.869166  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:28.218914  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:28.238449  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:28.370624  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:28.718931  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:28.745678  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:28.870336  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:29.218350  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:29.237827  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:29.369637  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:29.721178  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:29.738533  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:29.871052  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:30.154212  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:30.219280  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:30.236493  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:30.374199  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:30.718520  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:30.744719  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:30.870260  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:31.219234  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:31.236794  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:31.371048  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:31.718349  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:31.738516  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:31.870446  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:32.220114  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:32.239046  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:32.371459  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:32.649375  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:32.720129  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:32.749330  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:32.869470  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:33.218994  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:33.236895  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:33.376533  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:33.721459  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:33.750444  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:33.870800  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:34.218241  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:34.236673  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:34.371325  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:34.649682  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:34.719428  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:34.741178  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:34.869184  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:35.218402  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:35.236244  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:35.369472  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:35.719798  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:35.750522  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:35.870451  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:36.218924  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:36.237504  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:36.373065  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:36.718683  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:36.742382  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:36.870620  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:37.150017  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:37.218810  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:37.236196  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:37.369180  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:37.718066  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:37.741745  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:37.869300  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:38.218084  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:38.237553  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:38.370416  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:38.718226  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:38.737085  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:38.870884  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:39.151042  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:39.218633  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:39.236507  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:39.371347  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:39.718326  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:39.738235  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:39.869307  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:40.218803  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:40.237001  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:40.369375  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:40.718725  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:40.742341  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:40.869892  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:41.218119  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:41.240500  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:41.373725  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:41.650417  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:41.717699  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:41.737890  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:41.869495  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:42.226067  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:42.237543  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:42.374556  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:42.718976  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:42.745030  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:42.871186  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:43.219028  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:43.238084  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:43.370843  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:43.651813  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:43.718101  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:43.751490  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:43.871234  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:44.218810  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:44.236551  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:44.371141  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:44.718270  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:44.737739  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:44.870636  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:45.223033  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:45.246938  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:45.386761  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:45.718293  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:45.751686  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:45.870583  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:46.148729  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:46.218333  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:46.236675  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:46.370135  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:46.726332  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:46.743102  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:46.874631  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:47.219761  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:47.236920  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:47.371118  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:47.718738  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:47.752629  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:47.870320  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:48.149702  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:48.218199  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:48.237288  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:48.371836  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:48.724142  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 19:11:48.746300  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:48.869645  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:49.218612  694161 kapi.go:107] duration metric: took 1m13.504195307s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0417 19:11:49.220844  694161 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-873604 cluster.
	I0417 19:11:49.222830  694161 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0417 19:11:49.225223  694161 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0417 19:11:49.247040  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:49.370141  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:49.745956  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:49.869404  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:50.152375  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:50.246453  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:50.370212  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:50.744501  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:50.871108  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:51.237122  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:51.370542  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:51.739926  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:51.870413  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:52.237932  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:52.372597  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:52.650827  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:52.752841  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:52.869858  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:53.238097  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:53.369493  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:53.738440  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:53.869877  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:54.238327  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:54.378629  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:54.654021  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:54.742588  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:54.871247  694161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 19:11:55.236375  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:55.370578  694161 kapi.go:107] duration metric: took 1m24.009088193s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0417 19:11:55.756441  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:56.237089  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:56.738219  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:57.152018  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:57.236811  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:57.747232  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:58.237468  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:58.744977  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:59.236482  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:11:59.648673  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:11:59.738136  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:12:00.317239  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:12:00.745916  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:12:01.239124  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:12:01.649699  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:12:01.746095  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:12:02.236889  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:12:02.741611  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:12:03.239120  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:12:03.651329  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:12:03.745174  694161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 19:12:04.237053  694161 kapi.go:107] duration metric: took 1m32.50621535s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0417 19:12:04.240443  694161 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0417 19:12:04.242331  694161 addons.go:505] duration metric: took 1m38.971457593s for enable addons: enabled=[cloud-spanner ingress-dns storage-provisioner nvidia-device-plugin metrics-server yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0417 19:12:06.149840  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:12:08.648808  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:12:10.649204  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:12:13.148836  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:12:15.149714  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:12:17.150184  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:12:19.648638  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:12:21.649539  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:12:24.149824  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:12:26.155030  694161 pod_ready.go:102] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"False"
	I0417 19:12:28.149378  694161 pod_ready.go:92] pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace has status "Ready":"True"
	I0417 19:12:28.149404  694161 pod_ready.go:81] duration metric: took 1m28.006859378s for pod "metrics-server-c59844bb4-q7zp5" in "kube-system" namespace to be "Ready" ...
	I0417 19:12:28.149417  694161 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6lc6l" in "kube-system" namespace to be "Ready" ...
	I0417 19:12:28.154641  694161 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-6lc6l" in "kube-system" namespace has status "Ready":"True"
	I0417 19:12:28.154665  694161 pod_ready.go:81] duration metric: took 5.240053ms for pod "nvidia-device-plugin-daemonset-6lc6l" in "kube-system" namespace to be "Ready" ...
	I0417 19:12:28.154717  694161 pod_ready.go:38] duration metric: took 1m29.960629996s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0417 19:12:28.154738  694161 api_server.go:52] waiting for apiserver process to appear ...
	I0417 19:12:28.154789  694161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0417 19:12:28.154864  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0417 19:12:28.215121  694161 cri.go:89] found id: "e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211"
	I0417 19:12:28.215144  694161 cri.go:89] found id: ""
	I0417 19:12:28.215160  694161 logs.go:276] 1 containers: [e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211]
	I0417 19:12:28.215222  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:28.219199  694161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0417 19:12:28.219271  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0417 19:12:28.257376  694161 cri.go:89] found id: "601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1"
	I0417 19:12:28.257396  694161 cri.go:89] found id: ""
	I0417 19:12:28.257404  694161 logs.go:276] 1 containers: [601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1]
	I0417 19:12:28.257462  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:28.260955  694161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0417 19:12:28.261031  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0417 19:12:28.306001  694161 cri.go:89] found id: "07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311"
	I0417 19:12:28.306027  694161 cri.go:89] found id: ""
	I0417 19:12:28.306035  694161 logs.go:276] 1 containers: [07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311]
	I0417 19:12:28.306115  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:28.309905  694161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0417 19:12:28.310025  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0417 19:12:28.351814  694161 cri.go:89] found id: "e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76"
	I0417 19:12:28.351842  694161 cri.go:89] found id: ""
	I0417 19:12:28.351850  694161 logs.go:276] 1 containers: [e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76]
	I0417 19:12:28.351914  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:28.355525  694161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0417 19:12:28.355608  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0417 19:12:28.414372  694161 cri.go:89] found id: "86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17"
	I0417 19:12:28.414393  694161 cri.go:89] found id: ""
	I0417 19:12:28.414402  694161 logs.go:276] 1 containers: [86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17]
	I0417 19:12:28.414459  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:28.418031  694161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0417 19:12:28.418104  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0417 19:12:28.456737  694161 cri.go:89] found id: "97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458"
	I0417 19:12:28.456815  694161 cri.go:89] found id: ""
	I0417 19:12:28.456836  694161 logs.go:276] 1 containers: [97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458]
	I0417 19:12:28.456938  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:28.460493  694161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0417 19:12:28.460561  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0417 19:12:28.505567  694161 cri.go:89] found id: "fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6"
	I0417 19:12:28.505598  694161 cri.go:89] found id: ""
	I0417 19:12:28.505606  694161 logs.go:276] 1 containers: [fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6]
	I0417 19:12:28.505663  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:28.509220  694161 logs.go:123] Gathering logs for etcd [601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1] ...
	I0417 19:12:28.509245  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1"
	I0417 19:12:28.562732  694161 logs.go:123] Gathering logs for coredns [07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311] ...
	I0417 19:12:28.562804  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311"
	I0417 19:12:28.606981  694161 logs.go:123] Gathering logs for kube-scheduler [e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76] ...
	I0417 19:12:28.607017  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76"
	I0417 19:12:28.656922  694161 logs.go:123] Gathering logs for container status ...
	I0417 19:12:28.656955  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0417 19:12:28.706497  694161 logs.go:123] Gathering logs for kubelet ...
	I0417 19:12:28.706528  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0417 19:12:28.765045  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.985411    1495 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-873604' and this object
	W0417 19:12:28.765268  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.985465    1495 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-873604' and this object
	W0417 19:12:28.765449  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.985517    1495 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:28.765649  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.985530    1495 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:28.765813  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.985589    1495 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-873604" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:28.765994  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.985601    1495 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-873604" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:28.766177  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.988363    1495 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:28.766382  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.988460    1495 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:28.766565  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.988523    1495 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:28.766772  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.988539    1495 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	I0417 19:12:28.804099  694161 logs.go:123] Gathering logs for describe nodes ...
	I0417 19:12:28.805095  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0417 19:12:28.981173  694161 logs.go:123] Gathering logs for kube-apiserver [e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211] ...
	I0417 19:12:28.981206  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211"
	I0417 19:12:29.069695  694161 logs.go:123] Gathering logs for kube-proxy [86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17] ...
	I0417 19:12:29.069737  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17"
	I0417 19:12:29.108940  694161 logs.go:123] Gathering logs for kube-controller-manager [97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458] ...
	I0417 19:12:29.108969  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458"
	I0417 19:12:29.194463  694161 logs.go:123] Gathering logs for kindnet [fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6] ...
	I0417 19:12:29.194498  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6"
	I0417 19:12:29.233703  694161 logs.go:123] Gathering logs for CRI-O ...
	I0417 19:12:29.233793  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0417 19:12:29.331263  694161 logs.go:123] Gathering logs for dmesg ...
	I0417 19:12:29.331303  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0417 19:12:29.350417  694161 out.go:304] Setting ErrFile to fd 2...
	I0417 19:12:29.350446  694161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0417 19:12:29.350495  694161 out.go:239] X Problems detected in kubelet:
	W0417 19:12:29.350510  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.985601    1495 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-873604" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:29.350522  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.988363    1495 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:29.350534  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.988460    1495 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:29.350542  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.988523    1495 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:29.350552  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.988539    1495 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	I0417 19:12:29.350559  694161 out.go:304] Setting ErrFile to fd 2...
	I0417 19:12:29.350570  694161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:12:39.351331  694161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 19:12:39.368434  694161 api_server.go:72] duration metric: took 2m14.097783195s to wait for apiserver process to appear ...
	I0417 19:12:39.368458  694161 api_server.go:88] waiting for apiserver healthz status ...
	I0417 19:12:39.368492  694161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0417 19:12:39.368560  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0417 19:12:39.409853  694161 cri.go:89] found id: "e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211"
	I0417 19:12:39.409873  694161 cri.go:89] found id: ""
	I0417 19:12:39.409881  694161 logs.go:276] 1 containers: [e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211]
	I0417 19:12:39.409937  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:39.413569  694161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0417 19:12:39.413643  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0417 19:12:39.452693  694161 cri.go:89] found id: "601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1"
	I0417 19:12:39.452717  694161 cri.go:89] found id: ""
	I0417 19:12:39.452725  694161 logs.go:276] 1 containers: [601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1]
	I0417 19:12:39.452779  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:39.456270  694161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0417 19:12:39.456343  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0417 19:12:39.499495  694161 cri.go:89] found id: "07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311"
	I0417 19:12:39.499516  694161 cri.go:89] found id: ""
	I0417 19:12:39.499524  694161 logs.go:276] 1 containers: [07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311]
	I0417 19:12:39.499579  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:39.504195  694161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0417 19:12:39.504264  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0417 19:12:39.545852  694161 cri.go:89] found id: "e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76"
	I0417 19:12:39.545875  694161 cri.go:89] found id: ""
	I0417 19:12:39.545883  694161 logs.go:276] 1 containers: [e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76]
	I0417 19:12:39.545943  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:39.549688  694161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0417 19:12:39.549763  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0417 19:12:39.591672  694161 cri.go:89] found id: "86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17"
	I0417 19:12:39.591695  694161 cri.go:89] found id: ""
	I0417 19:12:39.591703  694161 logs.go:276] 1 containers: [86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17]
	I0417 19:12:39.591760  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:39.595500  694161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0417 19:12:39.595585  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0417 19:12:39.633385  694161 cri.go:89] found id: "97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458"
	I0417 19:12:39.633407  694161 cri.go:89] found id: ""
	I0417 19:12:39.633415  694161 logs.go:276] 1 containers: [97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458]
	I0417 19:12:39.633471  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:39.637028  694161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0417 19:12:39.637104  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0417 19:12:39.676483  694161 cri.go:89] found id: "fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6"
	I0417 19:12:39.676565  694161 cri.go:89] found id: ""
	I0417 19:12:39.676581  694161 logs.go:276] 1 containers: [fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6]
	I0417 19:12:39.676640  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:39.680313  694161 logs.go:123] Gathering logs for kubelet ...
	I0417 19:12:39.680340  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0417 19:12:39.735678  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.985411    1495 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-873604' and this object
	W0417 19:12:39.735909  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.985465    1495 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-873604' and this object
	W0417 19:12:39.736092  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.985517    1495 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:39.736291  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.985530    1495 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:39.736476  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.985589    1495 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-873604" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:39.736681  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.985601    1495 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-873604" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:39.736876  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.988363    1495 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:39.737081  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.988460    1495 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:39.737266  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.988523    1495 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:39.737472  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.988539    1495 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	I0417 19:12:39.775926  694161 logs.go:123] Gathering logs for describe nodes ...
	I0417 19:12:39.775958  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0417 19:12:39.911859  694161 logs.go:123] Gathering logs for coredns [07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311] ...
	I0417 19:12:39.911891  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311"
	I0417 19:12:39.956547  694161 logs.go:123] Gathering logs for kube-controller-manager [97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458] ...
	I0417 19:12:39.956577  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458"
	I0417 19:12:40.025096  694161 logs.go:123] Gathering logs for container status ...
	I0417 19:12:40.025144  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0417 19:12:40.105644  694161 logs.go:123] Gathering logs for CRI-O ...
	I0417 19:12:40.105682  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0417 19:12:40.210702  694161 logs.go:123] Gathering logs for dmesg ...
	I0417 19:12:40.210745  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0417 19:12:40.230875  694161 logs.go:123] Gathering logs for kube-apiserver [e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211] ...
	I0417 19:12:40.230912  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211"
	I0417 19:12:40.301628  694161 logs.go:123] Gathering logs for etcd [601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1] ...
	I0417 19:12:40.301660  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1"
	I0417 19:12:40.361420  694161 logs.go:123] Gathering logs for kube-scheduler [e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76] ...
	I0417 19:12:40.361462  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76"
	I0417 19:12:40.411255  694161 logs.go:123] Gathering logs for kube-proxy [86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17] ...
	I0417 19:12:40.411289  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17"
	I0417 19:12:40.451586  694161 logs.go:123] Gathering logs for kindnet [fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6] ...
	I0417 19:12:40.451617  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6"
	I0417 19:12:40.494642  694161 out.go:304] Setting ErrFile to fd 2...
	I0417 19:12:40.494676  694161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0417 19:12:40.494782  694161 out.go:239] X Problems detected in kubelet:
	W0417 19:12:40.494827  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.985601    1495 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-873604" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:40.494855  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.988363    1495 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:40.494863  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.988460    1495 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:40.494870  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.988523    1495 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:40.494881  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.988539    1495 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	I0417 19:12:40.494887  694161 out.go:304] Setting ErrFile to fd 2...
	I0417 19:12:40.494893  694161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:12:50.496684  694161 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0417 19:12:50.504299  694161 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0417 19:12:50.505439  694161 api_server.go:141] control plane version: v1.30.0-rc.2
	I0417 19:12:50.505468  694161 api_server.go:131] duration metric: took 11.136999618s to wait for apiserver health ...
	I0417 19:12:50.505478  694161 system_pods.go:43] waiting for kube-system pods to appear ...
	I0417 19:12:50.505499  694161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0417 19:12:50.505560  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0417 19:12:50.551940  694161 cri.go:89] found id: "e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211"
	I0417 19:12:50.551964  694161 cri.go:89] found id: ""
	I0417 19:12:50.551972  694161 logs.go:276] 1 containers: [e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211]
	I0417 19:12:50.552042  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:50.555833  694161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0417 19:12:50.555936  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0417 19:12:50.598297  694161 cri.go:89] found id: "601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1"
	I0417 19:12:50.598320  694161 cri.go:89] found id: ""
	I0417 19:12:50.598328  694161 logs.go:276] 1 containers: [601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1]
	I0417 19:12:50.598391  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:50.602101  694161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0417 19:12:50.602175  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0417 19:12:50.642793  694161 cri.go:89] found id: "07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311"
	I0417 19:12:50.642813  694161 cri.go:89] found id: ""
	I0417 19:12:50.642821  694161 logs.go:276] 1 containers: [07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311]
	I0417 19:12:50.642875  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:50.646409  694161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0417 19:12:50.646502  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0417 19:12:50.688600  694161 cri.go:89] found id: "e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76"
	I0417 19:12:50.688621  694161 cri.go:89] found id: ""
	I0417 19:12:50.688629  694161 logs.go:276] 1 containers: [e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76]
	I0417 19:12:50.688704  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:50.692293  694161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0417 19:12:50.692374  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0417 19:12:50.734228  694161 cri.go:89] found id: "86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17"
	I0417 19:12:50.734253  694161 cri.go:89] found id: ""
	I0417 19:12:50.734261  694161 logs.go:276] 1 containers: [86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17]
	I0417 19:12:50.734351  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:50.743726  694161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0417 19:12:50.743815  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0417 19:12:50.786457  694161 cri.go:89] found id: "97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458"
	I0417 19:12:50.786480  694161 cri.go:89] found id: ""
	I0417 19:12:50.786487  694161 logs.go:276] 1 containers: [97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458]
	I0417 19:12:50.786572  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:50.790301  694161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0417 19:12:50.790394  694161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0417 19:12:50.840094  694161 cri.go:89] found id: "fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6"
	I0417 19:12:50.840169  694161 cri.go:89] found id: ""
	I0417 19:12:50.840192  694161 logs.go:276] 1 containers: [fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6]
	I0417 19:12:50.840272  694161 ssh_runner.go:195] Run: which crictl
	I0417 19:12:50.844437  694161 logs.go:123] Gathering logs for container status ...
	I0417 19:12:50.844508  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0417 19:12:50.907645  694161 logs.go:123] Gathering logs for etcd [601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1] ...
	I0417 19:12:50.907675  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1"
	I0417 19:12:50.958811  694161 logs.go:123] Gathering logs for coredns [07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311] ...
	I0417 19:12:50.958845  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311"
	I0417 19:12:51.014483  694161 logs.go:123] Gathering logs for kube-scheduler [e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76] ...
	I0417 19:12:51.014523  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76"
	I0417 19:12:51.069807  694161 logs.go:123] Gathering logs for kube-proxy [86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17] ...
	I0417 19:12:51.069839  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17"
	I0417 19:12:51.120036  694161 logs.go:123] Gathering logs for kube-controller-manager [97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458] ...
	I0417 19:12:51.120068  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458"
	I0417 19:12:51.196076  694161 logs.go:123] Gathering logs for kindnet [fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6] ...
	I0417 19:12:51.196112  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6"
	I0417 19:12:51.239098  694161 logs.go:123] Gathering logs for kubelet ...
	I0417 19:12:51.239130  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0417 19:12:51.296063  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.985411    1495 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.296320  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.985465    1495 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.296511  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.985517    1495 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.296716  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.985530    1495 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.296888  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.985589    1495 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-873604" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.297072  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.985601    1495 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-873604" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.297261  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.988363    1495 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.297468  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.988460    1495 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.297656  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.988523    1495 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.297865  694161 logs.go:138] Found kubelet problem: Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.988539    1495 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	I0417 19:12:51.337800  694161 logs.go:123] Gathering logs for dmesg ...
	I0417 19:12:51.337832  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0417 19:12:51.356602  694161 logs.go:123] Gathering logs for describe nodes ...
	I0417 19:12:51.356634  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0417 19:12:51.495718  694161 logs.go:123] Gathering logs for kube-apiserver [e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211] ...
	I0417 19:12:51.495749  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211"
	I0417 19:12:51.563465  694161 logs.go:123] Gathering logs for CRI-O ...
	I0417 19:12:51.563540  694161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0417 19:12:51.657423  694161 out.go:304] Setting ErrFile to fd 2...
	I0417 19:12:51.657454  694161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0417 19:12:51.657514  694161 out.go:239] X Problems detected in kubelet:
	W0417 19:12:51.657528  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.985601    1495 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-873604" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.657536  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.988363    1495 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.657547  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.988460    1495 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.657555  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: W0417 19:10:57.988523    1495 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	W0417 19:12:51.657565  694161 out.go:239]   Apr 17 19:10:57 addons-873604 kubelet[1495]: E0417 19:10:57.988539    1495 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-873604" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-873604' and this object
	I0417 19:12:51.657572  694161 out.go:304] Setting ErrFile to fd 2...
	I0417 19:12:51.657583  694161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:13:01.671954  694161 system_pods.go:59] 18 kube-system pods found
	I0417 19:13:01.672013  694161 system_pods.go:61] "coredns-7db6d8ff4d-tf89r" [6d50a17c-d030-491f-a9e7-344e52ca2e43] Running
	I0417 19:13:01.672028  694161 system_pods.go:61] "csi-hostpath-attacher-0" [5ed8350d-6f82-4a99-81fb-acce4f44903e] Running
	I0417 19:13:01.672032  694161 system_pods.go:61] "csi-hostpath-resizer-0" [a094ae73-5cba-4d9f-8f80-f92b7b371c55] Running
	I0417 19:13:01.672037  694161 system_pods.go:61] "csi-hostpathplugin-28wcl" [9513b917-9c97-4e7d-a58c-68fcdb52eadc] Running
	I0417 19:13:01.672041  694161 system_pods.go:61] "etcd-addons-873604" [4d8bd5d0-ff8e-46c2-95b2-370af1fdf8ee] Running
	I0417 19:13:01.672052  694161 system_pods.go:61] "kindnet-xrsgr" [c915c17a-d1ae-404f-a25a-93e517bf7ff9] Running
	I0417 19:13:01.672057  694161 system_pods.go:61] "kube-apiserver-addons-873604" [5d12b02b-b639-4306-b01e-621e0adff821] Running
	I0417 19:13:01.672073  694161 system_pods.go:61] "kube-controller-manager-addons-873604" [8c8ebc95-425b-4347-81b1-9a01e1e106e7] Running
	I0417 19:13:01.672082  694161 system_pods.go:61] "kube-ingress-dns-minikube" [b4ebfb39-7e93-4561-9442-16bc8af64c70] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0417 19:13:01.672091  694161 system_pods.go:61] "kube-proxy-zcxl8" [ee652fb9-719b-460e-becc-f9c35909409c] Running
	I0417 19:13:01.672097  694161 system_pods.go:61] "kube-scheduler-addons-873604" [1493bd44-030a-408f-b7d8-6f60ae22987d] Running
	I0417 19:13:01.672102  694161 system_pods.go:61] "metrics-server-c59844bb4-q7zp5" [da8a5501-6baf-4977-905c-f81fe98110e2] Running
	I0417 19:13:01.672113  694161 system_pods.go:61] "nvidia-device-plugin-daemonset-6lc6l" [88c0cead-b0d0-4699-b183-dab722233906] Running
	I0417 19:13:01.672117  694161 system_pods.go:61] "registry-hlj26" [bd1989fe-0b5a-41a4-ae03-88af2d34eb0d] Running
	I0417 19:13:01.672121  694161 system_pods.go:61] "registry-proxy-qwqgq" [e06390fb-d1dc-4627-80a9-02edada26c01] Running
	I0417 19:13:01.672125  694161 system_pods.go:61] "snapshot-controller-745499f584-4wzw2" [4eb7bf86-05b3-4e06-83e6-05e94dd20f58] Running
	I0417 19:13:01.672129  694161 system_pods.go:61] "snapshot-controller-745499f584-j78nn" [84496916-d4e7-4a9b-b8e3-dce36db8163d] Running
	I0417 19:13:01.672136  694161 system_pods.go:61] "storage-provisioner" [71a1577e-c751-48be-b51e-ae0981fefa0b] Running
	I0417 19:13:01.672142  694161 system_pods.go:74] duration metric: took 11.166658539s to wait for pod list to return data ...
	I0417 19:13:01.672154  694161 default_sa.go:34] waiting for default service account to be created ...
	I0417 19:13:01.674721  694161 default_sa.go:45] found service account: "default"
	I0417 19:13:01.674749  694161 default_sa.go:55] duration metric: took 2.588232ms for default service account to be created ...
	I0417 19:13:01.674760  694161 system_pods.go:116] waiting for k8s-apps to be running ...
	I0417 19:13:01.684828  694161 system_pods.go:86] 18 kube-system pods found
	I0417 19:13:01.684864  694161 system_pods.go:89] "coredns-7db6d8ff4d-tf89r" [6d50a17c-d030-491f-a9e7-344e52ca2e43] Running
	I0417 19:13:01.684871  694161 system_pods.go:89] "csi-hostpath-attacher-0" [5ed8350d-6f82-4a99-81fb-acce4f44903e] Running
	I0417 19:13:01.684876  694161 system_pods.go:89] "csi-hostpath-resizer-0" [a094ae73-5cba-4d9f-8f80-f92b7b371c55] Running
	I0417 19:13:01.684881  694161 system_pods.go:89] "csi-hostpathplugin-28wcl" [9513b917-9c97-4e7d-a58c-68fcdb52eadc] Running
	I0417 19:13:01.684886  694161 system_pods.go:89] "etcd-addons-873604" [4d8bd5d0-ff8e-46c2-95b2-370af1fdf8ee] Running
	I0417 19:13:01.684891  694161 system_pods.go:89] "kindnet-xrsgr" [c915c17a-d1ae-404f-a25a-93e517bf7ff9] Running
	I0417 19:13:01.684896  694161 system_pods.go:89] "kube-apiserver-addons-873604" [5d12b02b-b639-4306-b01e-621e0adff821] Running
	I0417 19:13:01.684901  694161 system_pods.go:89] "kube-controller-manager-addons-873604" [8c8ebc95-425b-4347-81b1-9a01e1e106e7] Running
	I0417 19:13:01.684909  694161 system_pods.go:89] "kube-ingress-dns-minikube" [b4ebfb39-7e93-4561-9442-16bc8af64c70] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0417 19:13:01.684921  694161 system_pods.go:89] "kube-proxy-zcxl8" [ee652fb9-719b-460e-becc-f9c35909409c] Running
	I0417 19:13:01.684930  694161 system_pods.go:89] "kube-scheduler-addons-873604" [1493bd44-030a-408f-b7d8-6f60ae22987d] Running
	I0417 19:13:01.684934  694161 system_pods.go:89] "metrics-server-c59844bb4-q7zp5" [da8a5501-6baf-4977-905c-f81fe98110e2] Running
	I0417 19:13:01.684939  694161 system_pods.go:89] "nvidia-device-plugin-daemonset-6lc6l" [88c0cead-b0d0-4699-b183-dab722233906] Running
	I0417 19:13:01.684947  694161 system_pods.go:89] "registry-hlj26" [bd1989fe-0b5a-41a4-ae03-88af2d34eb0d] Running
	I0417 19:13:01.684951  694161 system_pods.go:89] "registry-proxy-qwqgq" [e06390fb-d1dc-4627-80a9-02edada26c01] Running
	I0417 19:13:01.684954  694161 system_pods.go:89] "snapshot-controller-745499f584-4wzw2" [4eb7bf86-05b3-4e06-83e6-05e94dd20f58] Running
	I0417 19:13:01.684959  694161 system_pods.go:89] "snapshot-controller-745499f584-j78nn" [84496916-d4e7-4a9b-b8e3-dce36db8163d] Running
	I0417 19:13:01.684965  694161 system_pods.go:89] "storage-provisioner" [71a1577e-c751-48be-b51e-ae0981fefa0b] Running
	I0417 19:13:01.684974  694161 system_pods.go:126] duration metric: took 10.207794ms to wait for k8s-apps to be running ...
	I0417 19:13:01.684985  694161 system_svc.go:44] waiting for kubelet service to be running ....
	I0417 19:13:01.685047  694161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 19:13:01.697447  694161 system_svc.go:56] duration metric: took 12.451388ms WaitForService to wait for kubelet
	I0417 19:13:01.697477  694161 kubeadm.go:576] duration metric: took 2m36.426831573s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0417 19:13:01.697496  694161 node_conditions.go:102] verifying NodePressure condition ...
	I0417 19:13:01.700854  694161 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0417 19:13:01.700889  694161 node_conditions.go:123] node cpu capacity is 2
	I0417 19:13:01.700902  694161 node_conditions.go:105] duration metric: took 3.399502ms to run NodePressure ...
	I0417 19:13:01.700915  694161 start.go:240] waiting for startup goroutines ...
	I0417 19:13:01.700923  694161 start.go:245] waiting for cluster config update ...
	I0417 19:13:01.700940  694161 start.go:254] writing updated cluster config ...
	I0417 19:13:01.701278  694161 ssh_runner.go:195] Run: rm -f paused
	I0417 19:13:01.923739  694161 start.go:600] kubectl: 1.29.4, cluster: 1.30.0-rc.2 (minor skew: 1)
	I0417 19:13:01.926029  694161 out.go:177] * Done! kubectl is now configured to use "addons-873604" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 17 19:18:22 addons-873604 crio[912]: time="2024-04-17 19:18:22.010958426Z" level=warning msg="Allowed annotations are specified for workload []"
	Apr 17 19:18:22 addons-873604 crio[912]: time="2024-04-17 19:18:22.084671950Z" level=info msg="Created container 63b95405d23c8b2fa3c38d3147e86ef16c9bd15be70f625cd5037cf443d7f4c7: default/hello-world-app-86c47465fc-dpqqm/hello-world-app" id=fc2c4441-a930-40c8-a98d-c605d92eb7ea name=/runtime.v1.RuntimeService/CreateContainer
	Apr 17 19:18:22 addons-873604 crio[912]: time="2024-04-17 19:18:22.086667248Z" level=info msg="Starting container: 63b95405d23c8b2fa3c38d3147e86ef16c9bd15be70f625cd5037cf443d7f4c7" id=a194949f-67b6-484f-b34b-0515a9727087 name=/runtime.v1.RuntimeService/StartContainer
	Apr 17 19:18:22 addons-873604 crio[912]: time="2024-04-17 19:18:22.094912878Z" level=info msg="Started container" PID=8930 containerID=63b95405d23c8b2fa3c38d3147e86ef16c9bd15be70f625cd5037cf443d7f4c7 description=default/hello-world-app-86c47465fc-dpqqm/hello-world-app id=a194949f-67b6-484f-b34b-0515a9727087 name=/runtime.v1.RuntimeService/StartContainer sandboxID=95ab8090e20460c5abd45df22885b7773b0190aacd76b59af7eaf37f133533b9
	Apr 17 19:18:22 addons-873604 conmon[8917]: conmon 63b95405d23c8b2fa3c3 <ninfo>: container 8930 exited with status 1
	Apr 17 19:18:22 addons-873604 crio[912]: time="2024-04-17 19:18:22.962916132Z" level=info msg="Removing container: c2890bc60e42bfa91aa141dce53ff440344f7ae67f5f32fa3b685315fb266c21" id=dbf59587-0138-4500-b25c-34376dc39b2f name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 17 19:18:22 addons-873604 crio[912]: time="2024-04-17 19:18:22.983923021Z" level=info msg="Removed container c2890bc60e42bfa91aa141dce53ff440344f7ae67f5f32fa3b685315fb266c21: default/hello-world-app-86c47465fc-dpqqm/hello-world-app" id=dbf59587-0138-4500-b25c-34376dc39b2f name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 17 19:19:44 addons-873604 crio[912]: time="2024-04-17 19:19:44.008072437Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=06e446e6-7bcb-4194-be05-5ed793783afd name=/runtime.v1.ImageService/ImageStatus
	Apr 17 19:19:44 addons-873604 crio[912]: time="2024-04-17 19:19:44.008326265Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=06e446e6-7bcb-4194-be05-5ed793783afd name=/runtime.v1.ImageService/ImageStatus
	Apr 17 19:19:44 addons-873604 crio[912]: time="2024-04-17 19:19:44.009130216Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=e373dd3f-640a-420c-a193-73cdd100b8b4 name=/runtime.v1.ImageService/ImageStatus
	Apr 17 19:19:44 addons-873604 crio[912]: time="2024-04-17 19:19:44.009346744Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=e373dd3f-640a-420c-a193-73cdd100b8b4 name=/runtime.v1.ImageService/ImageStatus
	Apr 17 19:19:44 addons-873604 crio[912]: time="2024-04-17 19:19:44.010324493Z" level=info msg="Creating container: default/hello-world-app-86c47465fc-dpqqm/hello-world-app" id=caa8b112-f8c8-48de-bb0a-dd9f865e9bb2 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 17 19:19:44 addons-873604 crio[912]: time="2024-04-17 19:19:44.010449002Z" level=warning msg="Allowed annotations are specified for workload []"
	Apr 17 19:19:44 addons-873604 crio[912]: time="2024-04-17 19:19:44.072793129Z" level=info msg="Created container 10cd81cfbbcf7b0d9b4fb54057416e0c5dacff81215ea010b39c18a9d6f0648c: default/hello-world-app-86c47465fc-dpqqm/hello-world-app" id=caa8b112-f8c8-48de-bb0a-dd9f865e9bb2 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 17 19:19:44 addons-873604 crio[912]: time="2024-04-17 19:19:44.073494503Z" level=info msg="Starting container: 10cd81cfbbcf7b0d9b4fb54057416e0c5dacff81215ea010b39c18a9d6f0648c" id=a6159a0e-1214-4120-b3ee-17c4a279e2ec name=/runtime.v1.RuntimeService/StartContainer
	Apr 17 19:19:44 addons-873604 crio[912]: time="2024-04-17 19:19:44.079388569Z" level=info msg="Started container" PID=8991 containerID=10cd81cfbbcf7b0d9b4fb54057416e0c5dacff81215ea010b39c18a9d6f0648c description=default/hello-world-app-86c47465fc-dpqqm/hello-world-app id=a6159a0e-1214-4120-b3ee-17c4a279e2ec name=/runtime.v1.RuntimeService/StartContainer sandboxID=95ab8090e20460c5abd45df22885b7773b0190aacd76b59af7eaf37f133533b9
	Apr 17 19:19:44 addons-873604 conmon[8980]: conmon 10cd81cfbbcf7b0d9b4f <ninfo>: container 8991 exited with status 1
	Apr 17 19:19:44 addons-873604 crio[912]: time="2024-04-17 19:19:44.134900011Z" level=info msg="Removing container: 63b95405d23c8b2fa3c38d3147e86ef16c9bd15be70f625cd5037cf443d7f4c7" id=49d1924b-98ee-4924-b786-7679ca3b4d3c name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 17 19:19:44 addons-873604 crio[912]: time="2024-04-17 19:19:44.155389001Z" level=info msg="Removed container 63b95405d23c8b2fa3c38d3147e86ef16c9bd15be70f625cd5037cf443d7f4c7: default/hello-world-app-86c47465fc-dpqqm/hello-world-app" id=49d1924b-98ee-4924-b786-7679ca3b4d3c name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 17 19:20:10 addons-873604 crio[912]: time="2024-04-17 19:20:10.400741219Z" level=info msg="Stopping container: 97e99410da22d18466300d3822420c3d5591459f7ec534db2121a16f448f9d97 (timeout: 30s)" id=eae1f37e-d5a1-4626-969c-f755b9cb0d81 name=/runtime.v1.RuntimeService/StopContainer
	Apr 17 19:20:11 addons-873604 crio[912]: time="2024-04-17 19:20:11.577593167Z" level=info msg="Stopped container 97e99410da22d18466300d3822420c3d5591459f7ec534db2121a16f448f9d97: kube-system/metrics-server-c59844bb4-q7zp5/metrics-server" id=eae1f37e-d5a1-4626-969c-f755b9cb0d81 name=/runtime.v1.RuntimeService/StopContainer
	Apr 17 19:20:11 addons-873604 crio[912]: time="2024-04-17 19:20:11.579277499Z" level=info msg="Stopping pod sandbox: 363043df0486c004d1bcf1884498d2b79b49aaf001f7f69a28663f009b399c7a" id=9696b2c3-142d-4c69-b394-bfb4b214497f name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 17 19:20:11 addons-873604 crio[912]: time="2024-04-17 19:20:11.579789651Z" level=info msg="Got pod network &{Name:metrics-server-c59844bb4-q7zp5 Namespace:kube-system ID:363043df0486c004d1bcf1884498d2b79b49aaf001f7f69a28663f009b399c7a UID:da8a5501-6baf-4977-905c-f81fe98110e2 NetNS:/var/run/netns/790dde49-3a89-4b58-b594-dcb54e112934 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Apr 17 19:20:11 addons-873604 crio[912]: time="2024-04-17 19:20:11.580082041Z" level=info msg="Deleting pod kube-system_metrics-server-c59844bb4-q7zp5 from CNI network \"kindnet\" (type=ptp)"
	Apr 17 19:20:11 addons-873604 crio[912]: time="2024-04-17 19:20:11.598579812Z" level=info msg="Stopped pod sandbox: 363043df0486c004d1bcf1884498d2b79b49aaf001f7f69a28663f009b399c7a" id=9696b2c3-142d-4c69-b394-bfb4b214497f name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	10cd81cfbbcf7       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                        27 seconds ago      Exited              hello-world-app           5                   95ab8090e2046       hello-world-app-86c47465fc-dpqqm
	5768e9026a041       docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742                         5 minutes ago       Running             nginx                     0                   40ee22d252486       nginx
	7c983645ca770       ghcr.io/headlamp-k8s/headlamp@sha256:1f277f42730106526a27560517a4c5f9253ccb2477be458986f44a791158a02c                   6 minutes ago       Running             headlamp                  0                   3035e4d6159cf       headlamp-7559bf459f-4rl7c
	f1ee6f9af7955       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69            8 minutes ago       Running             gcp-auth                  0                   a1051241b01f6       gcp-auth-5db96cd9b4-j8trl
	bdac711194bbe       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         8 minutes ago       Running             yakd                      0                   f8052eddb7691       yakd-dashboard-5ddbf7d777-shv8z
	97e99410da22d       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70   8 minutes ago       Exited              metrics-server            0                   363043df0486c       metrics-server-c59844bb4-q7zp5
	348b13f6f5fc0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        9 minutes ago       Running             storage-provisioner       0                   9533e27c6447d       storage-provisioner
	07b8ad1d41f6b       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                        9 minutes ago       Running             coredns                   0                   60dc8e785aa14       coredns-7db6d8ff4d-tf89r
	fcb960be1e4e3       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d                                                        9 minutes ago       Running             kindnet-cni               0                   95e574383062b       kindnet-xrsgr
	86f101ac5b7e9       aa30953d3c2b4acff6d925faf6c4af0ac0577bf606ddf8491ab14ca0cabba691                                                        9 minutes ago       Running             kube-proxy                0                   d23200fdc7c43       kube-proxy-zcxl8
	e9f56dc186c7a       425022910de1d4ab7b21888dfad9e8f9da04f37712dccd64347bbfd735b80657                                                        10 minutes ago      Running             kube-scheduler            0                   12c490b1344f4       kube-scheduler-addons-873604
	97206e2d817c0       88320cfaf308b507d1d1d6fa062612281320e1ca1add79c7b22b5b0a19756aa1                                                        10 minutes ago      Running             kube-controller-manager   0                   a915e5f0093c1       kube-controller-manager-addons-873604
	e7fa33d45e130       78b24de5c18c446278f50432f209bd786ff0d05a4d09b222d1f17998ae2ce121                                                        10 minutes ago      Running             kube-apiserver            0                   e7e7213a05b6e       kube-apiserver-addons-873604
	601178ae2a7a1       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                        10 minutes ago      Running             etcd                      0                   67ea6d389528c       etcd-addons-873604
	
	
	==> coredns [07b8ad1d41f6b99a33bded223cd274dd0ded539371632beac88523c59f387311] <==
	[INFO] 10.244.0.20:41624 - 16372 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000056605s
	[INFO] 10.244.0.20:42073 - 9120 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001999449s
	[INFO] 10.244.0.20:41624 - 22779 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000067461s
	[INFO] 10.244.0.20:42073 - 36836 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000238788s
	[INFO] 10.244.0.20:41624 - 62401 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001198673s
	[INFO] 10.244.0.20:41624 - 35033 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002058837s
	[INFO] 10.244.0.20:41624 - 8543 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000115607s
	[INFO] 10.244.0.20:42043 - 14800 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00011349s
	[INFO] 10.244.0.20:42043 - 13218 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000102479s
	[INFO] 10.244.0.20:34666 - 46433 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000066804s
	[INFO] 10.244.0.20:42043 - 32959 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000153826s
	[INFO] 10.244.0.20:34666 - 4966 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000051461s
	[INFO] 10.244.0.20:42043 - 25755 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000067264s
	[INFO] 10.244.0.20:34666 - 757 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000070907s
	[INFO] 10.244.0.20:42043 - 17547 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000063678s
	[INFO] 10.244.0.20:34666 - 55665 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00003986s
	[INFO] 10.244.0.20:42043 - 61124 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000039745s
	[INFO] 10.244.0.20:34666 - 44822 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040105s
	[INFO] 10.244.0.20:34666 - 7842 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00016999s
	[INFO] 10.244.0.20:42043 - 46459 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001286237s
	[INFO] 10.244.0.20:34666 - 12996 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001534148s
	[INFO] 10.244.0.20:42043 - 46881 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001279779s
	[INFO] 10.244.0.20:42043 - 729 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000212934s
	[INFO] 10.244.0.20:34666 - 38077 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00113648s
	[INFO] 10.244.0.20:34666 - 8603 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000074541s
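	The NXDOMAIN/NOERROR pattern above is ordinary Kubernetes DNS search-path expansion, not a failure: the querying pod's resolv.conf carries `options ndots:5` plus a `search` list, and "hello-world-app.default.svc.cluster.local" has only 4 dots, so each search suffix is tried (and NXDOMAINs) before the bare name resolves. A minimal sketch of that expansion logic, assuming glibc-style resolver behavior (this helper and the example search list are illustrative, not part of minikube or CoreDNS):

	```python
	# Sketch of glibc-style search-path expansion driven by resolv.conf
	# `search` domains and `options ndots:N`. Illustrative only.

	def candidate_fqdns(name: str, search: list[str], ndots: int = 5) -> list[str]:
	    """Return the query names a resolver would try, in order."""
	    if name.endswith("."):
	        # A trailing dot marks the name as fully qualified: no expansion.
	        return [name]
	    if name.count(".") < ndots:
	        # Fewer dots than ndots: search suffixes are tried before the bare name.
	        return [f"{name}.{s}" for s in search] + [name]
	    # Enough dots: the bare name is tried first, suffixes as fallback.
	    return [name] + [f"{name}.{s}" for s in search]

	# Search list as it would appear for a pod in the ingress-nginx namespace,
	# on a node with an AWS-provided DNS suffix (matching the log above).
	search = [
	    "ingress-nginx.svc.cluster.local",
	    "svc.cluster.local",
	    "cluster.local",
	    "us-east-2.compute.internal",
	]

	for q in candidate_fqdns("hello-world-app.default.svc.cluster.local", search):
	    print(q)
	```

	The four printed suffix candidates match the four NXDOMAIN queries in the log, and the final bare name matches the NOERROR answer.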
	
	
	==> describe nodes <==
	Name:               addons-873604
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-873604
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=addons-873604
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_17T19_10_11_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-873604
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 19:10:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-873604
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 19:20:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Apr 2024 19:17:20 +0000   Wed, 17 Apr 2024 19:10:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Apr 2024 19:17:20 +0000   Wed, 17 Apr 2024 19:10:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Apr 2024 19:17:20 +0000   Wed, 17 Apr 2024 19:10:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Apr 2024 19:17:20 +0000   Wed, 17 Apr 2024 19:10:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-873604
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f283e0278094667a9e13c23300099a6
	  System UUID:                dc0c30cc-5a3b-4082-8b79-86e7972a9cc9
	  Boot ID:                    ab21f790-14ed-4d12-b82f-2c18616b58d7
	  Kernel Version:             5.15.0-1057-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-dpqqm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  gcp-auth                    gcp-auth-5db96cd9b4-j8trl                0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m36s
	  headlamp                    headlamp-7559bf459f-4rl7c                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 coredns-7db6d8ff4d-tf89r                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     9m47s
	  kube-system                 etcd-addons-873604                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-xrsgr                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m46s
	  kube-system                 kube-apiserver-addons-873604             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-873604    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-zcxl8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 kube-scheduler-addons-873604             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-shv8z          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     9m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 9m41s              kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node addons-873604 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node addons-873604 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node addons-873604 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node addons-873604 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node addons-873604 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node addons-873604 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m47s              node-controller  Node addons-873604 event: Registered Node addons-873604 in Controller
	  Normal  NodeReady                9m14s              kubelet          Node addons-873604 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001122] FS-Cache: O-key=[8] '176fed0000000000'
	[  +0.000728] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=000000008d8d0d3c{9p.inode} n=00000000375612da
	[  +0.001057] FS-Cache: N-key=[8] '176fed0000000000'
	[  +0.002899] FS-Cache: Duplicate cookie detected
	[  +0.000729] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000969] FS-Cache: O-cookie d=000000008d8d0d3c{9p.inode} n=00000000222534be
	[  +0.001144] FS-Cache: O-key=[8] '176fed0000000000'
	[  +0.000797] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000974] FS-Cache: N-cookie d=000000008d8d0d3c{9p.inode} n=00000000efdc28ce
	[  +0.001067] FS-Cache: N-key=[8] '176fed0000000000'
	[  +2.738471] FS-Cache: Duplicate cookie detected
	[  +0.000797] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000979] FS-Cache: O-cookie d=000000008d8d0d3c{9p.inode} n=0000000071b913e3
	[  +0.001171] FS-Cache: O-key=[8] '166fed0000000000'
	[  +0.000734] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000989] FS-Cache: N-cookie d=000000008d8d0d3c{9p.inode} n=00000000cfcbb435
	[  +0.001078] FS-Cache: N-key=[8] '166fed0000000000'
	[  +0.344778] FS-Cache: Duplicate cookie detected
	[  +0.000722] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001021] FS-Cache: O-cookie d=000000008d8d0d3c{9p.inode} n=000000004c6b74e8
	[  +0.001035] FS-Cache: O-key=[8] '1c6fed0000000000'
	[  +0.000749] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000980] FS-Cache: N-cookie d=000000008d8d0d3c{9p.inode} n=00000000b0fc5245
	[  +0.001098] FS-Cache: N-key=[8] '1c6fed0000000000'
	
	
	==> etcd [601178ae2a7a169ee4d3b2e90ffa40bccaf408ddb0708275e378d13e89d818b1] <==
	{"level":"info","ts":"2024-04-17T19:10:05.168475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-17T19:10:05.168501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-04-17T19:10:05.168518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-04-17T19:10:05.168525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-04-17T19:10:05.168542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-04-17T19:10:05.16855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-04-17T19:10:05.172531Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-17T19:10:05.17592Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-873604 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-17T19:10:05.176072Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-17T19:10:05.176122Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-17T19:10:05.178083Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-04-17T19:10:05.17819Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-17T19:10:05.178267Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-17T19:10:05.178295Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-17T19:10:05.192451Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-17T19:10:05.192489Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-17T19:10:05.205482Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-04-17T19:10:26.102111Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.981924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-17T19:10:26.111306Z","caller":"traceutil/trace.go:171","msg":"trace[449486202] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:393; }","duration":"165.188546ms","start":"2024-04-17T19:10:25.946093Z","end":"2024-04-17T19:10:26.111282Z","steps":["trace[449486202] 'get authentication metadata'  (duration: 83.567129ms)","trace[449486202] 'range keys from in-memory index tree'  (duration: 72.168959ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-17T19:10:26.121079Z","caller":"traceutil/trace.go:171","msg":"trace[121349269] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"110.431613ms","start":"2024-04-17T19:10:26.010631Z","end":"2024-04-17T19:10:26.121063Z","steps":["trace[121349269] 'process raft request'  (duration: 48.993658ms)","trace[121349269] 'compare'  (duration: 51.638273ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-17T19:10:26.232148Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.741996ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128028578057463715 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kindnet-xrsgr.17c726f712bdc733\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kindnet-xrsgr.17c726f712bdc733\" value_size:690 lease:8128028578057462975 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-17T19:10:26.232262Z","caller":"traceutil/trace.go:171","msg":"trace[9082437] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"133.441642ms","start":"2024-04-17T19:10:26.09881Z","end":"2024-04-17T19:10:26.232252Z","steps":["trace[9082437] 'process raft request'  (duration: 22.206244ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-17T19:20:05.896915Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1928}
	{"level":"info","ts":"2024-04-17T19:20:05.932476Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1928,"took":"34.849189ms","hash":720167024,"current-db-size-bytes":8699904,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":5386240,"current-db-size-in-use":"5.4 MB"}
	{"level":"info","ts":"2024-04-17T19:20:05.93253Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":720167024,"revision":1928,"compact-revision":-1}
	
	
	==> gcp-auth [f1ee6f9af795519a5b89e446d8d966b8898a1b9f77b9dcde765e3ab58ba288af] <==
	2024/04/17 19:11:48 GCP Auth Webhook started!
	2024/04/17 19:13:13 Ready to marshal response ...
	2024/04/17 19:13:13 Ready to write response ...
	2024/04/17 19:13:13 Ready to marshal response ...
	2024/04/17 19:13:13 Ready to write response ...
	2024/04/17 19:13:14 Ready to marshal response ...
	2024/04/17 19:13:14 Ready to write response ...
	2024/04/17 19:13:24 Ready to marshal response ...
	2024/04/17 19:13:24 Ready to write response ...
	2024/04/17 19:13:25 Ready to marshal response ...
	2024/04/17 19:13:25 Ready to write response ...
	2024/04/17 19:13:25 Ready to marshal response ...
	2024/04/17 19:13:25 Ready to write response ...
	2024/04/17 19:13:25 Ready to marshal response ...
	2024/04/17 19:13:25 Ready to write response ...
	2024/04/17 19:13:38 Ready to marshal response ...
	2024/04/17 19:13:38 Ready to write response ...
	2024/04/17 19:14:04 Ready to marshal response ...
	2024/04/17 19:14:04 Ready to write response ...
	2024/04/17 19:14:33 Ready to marshal response ...
	2024/04/17 19:14:33 Ready to write response ...
	2024/04/17 19:16:52 Ready to marshal response ...
	2024/04/17 19:16:52 Ready to write response ...
	
	
	==> kernel <==
	 19:20:12 up  3:02,  0 users,  load average: 0.15, 0.81, 1.82
	Linux addons-873604 5.15.0-1057-aws #63~20.04.1-Ubuntu SMP Mon Mar 25 10:29:14 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [fcb960be1e4e36ab2daf9232af1f0103f9a78d388cbee409fa9d031dc1f32ce6] <==
	I0417 19:18:08.091274       1 main.go:227] handling current node
	I0417 19:18:18.097919       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0417 19:18:18.097950       1 main.go:227] handling current node
	I0417 19:18:28.109087       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0417 19:18:28.109116       1 main.go:227] handling current node
	I0417 19:18:38.122513       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0417 19:18:38.122544       1 main.go:227] handling current node
	I0417 19:18:48.126366       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0417 19:18:48.126396       1 main.go:227] handling current node
	I0417 19:18:58.137974       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0417 19:18:58.138005       1 main.go:227] handling current node
	I0417 19:19:08.149298       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0417 19:19:08.149326       1 main.go:227] handling current node
	I0417 19:19:18.153076       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0417 19:19:18.153103       1 main.go:227] handling current node
	I0417 19:19:28.163875       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0417 19:19:28.163905       1 main.go:227] handling current node
	I0417 19:19:38.168229       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0417 19:19:38.168261       1 main.go:227] handling current node
	I0417 19:19:48.181076       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0417 19:19:48.181105       1 main.go:227] handling current node
	I0417 19:19:58.192281       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0417 19:19:58.192310       1 main.go:227] handling current node
	I0417 19:20:08.204799       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0417 19:20:08.204824       1 main.go:227] handling current node
	
	
	==> kube-apiserver [e7fa33d45e13088acc35af0f7974c2ad6367f93df73d8b908c400bd564810211] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0417 19:12:28.056295       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.21.207:443/apis/metrics.k8s.io/v1beta1: Get "https://10.101.21.207:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.101.21.207:443: connect: connection refused
	I0417 19:12:28.122806       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0417 19:13:24.923913       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.183.223"}
	E0417 19:13:41.159963       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0417 19:13:49.676039       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0417 19:14:21.177494       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0417 19:14:21.177661       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0417 19:14:21.200660       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0417 19:14:21.201098       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0417 19:14:21.223391       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0417 19:14:21.223436       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0417 19:14:21.239298       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0417 19:14:21.239355       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0417 19:14:21.259825       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0417 19:14:21.261697       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0417 19:14:21.361640       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"snapshot-controller\" not found]"
	W0417 19:14:22.223604       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0417 19:14:22.260756       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0417 19:14:22.284496       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0417 19:14:28.027539       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0417 19:14:29.064495       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0417 19:14:33.621642       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0417 19:14:33.942910       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.125.79"}
	I0417 19:16:53.100622       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.90.120"}
	
	
	==> kube-controller-manager [97206e2d817c0074c80b73cac71499e6402f5054ce786fbc53468051e08c4458] <==
	E0417 19:18:29.980270       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0417 19:18:34.788608       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0417 19:18:34.788644       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0417 19:18:36.125542       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0417 19:18:36.125585       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0417 19:18:38.020209       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="45.767µs"
	W0417 19:18:46.879638       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0417 19:18:46.879748       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0417 19:19:16.540043       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0417 19:19:16.540082       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0417 19:19:24.586307       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0417 19:19:24.586351       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0417 19:19:24.739084       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0417 19:19:24.739146       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0417 19:19:28.574072       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0417 19:19:28.574115       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0417 19:19:44.144403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="39.482µs"
	I0417 19:19:59.020223       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="41.894µs"
	W0417 19:20:03.099834       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0417 19:20:03.099875       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0417 19:20:08.519943       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0417 19:20:08.520014       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0417 19:20:08.878494       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0417 19:20:08.878550       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0417 19:20:10.387255       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="8.147µs"
	
	
	==> kube-proxy [86f101ac5b7e937cb73fb0f62ae5c544b527e551a3f72e1b001aa8e7c11a8d17] <==
	I0417 19:10:29.062651       1 server_linux.go:69] "Using iptables proxy"
	I0417 19:10:29.557588       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0417 19:10:30.059299       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0417 19:10:30.059442       1 server_linux.go:165] "Using iptables Proxier"
	I0417 19:10:30.131118       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0417 19:10:30.131408       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0417 19:10:30.131998       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0417 19:10:30.132362       1 server.go:872] "Version info" version="v1.30.0-rc.2"
	I0417 19:10:30.132513       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 19:10:30.133803       1 config.go:192] "Starting service config controller"
	I0417 19:10:30.133919       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0417 19:10:30.134005       1 config.go:101] "Starting endpoint slice config controller"
	I0417 19:10:30.134050       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0417 19:10:30.139469       1 config.go:319] "Starting node config controller"
	I0417 19:10:30.140539       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0417 19:10:30.238057       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0417 19:10:30.249161       1 shared_informer.go:320] Caches are synced for service config
	I0417 19:10:30.250075       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e9f56dc186c7a977b0698573fa6a2ba605e315d6266f288fb6600f09843b4c76] <==
	W0417 19:10:08.590406       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0417 19:10:08.590758       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0417 19:10:08.590446       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0417 19:10:08.590827       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0417 19:10:08.590486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0417 19:10:08.590898       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0417 19:10:08.594573       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0417 19:10:08.594682       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0417 19:10:09.416937       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0417 19:10:09.416982       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0417 19:10:09.458742       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0417 19:10:09.458872       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0417 19:10:09.540585       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0417 19:10:09.540621       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0417 19:10:09.617942       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0417 19:10:09.617985       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0417 19:10:09.687865       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0417 19:10:09.687925       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0417 19:10:09.750333       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0417 19:10:09.750381       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0417 19:10:09.750438       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0417 19:10:09.750457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0417 19:10:09.762547       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0417 19:10:09.762590       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0417 19:10:12.267357       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 17 19:19:04 addons-873604 kubelet[1495]: I0417 19:19:04.007266    1495 scope.go:117] "RemoveContainer" containerID="63b95405d23c8b2fa3c38d3147e86ef16c9bd15be70f625cd5037cf443d7f4c7"
	Apr 17 19:19:04 addons-873604 kubelet[1495]: E0417 19:19:04.007756    1495 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-dpqqm_default(69350501-1ef3-4b48-8974-ca1451a9592f)\"" pod="default/hello-world-app-86c47465fc-dpqqm" podUID="69350501-1ef3-4b48-8974-ca1451a9592f"
	Apr 17 19:19:17 addons-873604 kubelet[1495]: I0417 19:19:17.007667    1495 scope.go:117] "RemoveContainer" containerID="63b95405d23c8b2fa3c38d3147e86ef16c9bd15be70f625cd5037cf443d7f4c7"
	Apr 17 19:19:17 addons-873604 kubelet[1495]: E0417 19:19:17.007956    1495 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-dpqqm_default(69350501-1ef3-4b48-8974-ca1451a9592f)\"" pod="default/hello-world-app-86c47465fc-dpqqm" podUID="69350501-1ef3-4b48-8974-ca1451a9592f"
	Apr 17 19:19:31 addons-873604 kubelet[1495]: I0417 19:19:31.013071    1495 scope.go:117] "RemoveContainer" containerID="63b95405d23c8b2fa3c38d3147e86ef16c9bd15be70f625cd5037cf443d7f4c7"
	Apr 17 19:19:31 addons-873604 kubelet[1495]: E0417 19:19:31.013940    1495 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-dpqqm_default(69350501-1ef3-4b48-8974-ca1451a9592f)\"" pod="default/hello-world-app-86c47465fc-dpqqm" podUID="69350501-1ef3-4b48-8974-ca1451a9592f"
	Apr 17 19:19:44 addons-873604 kubelet[1495]: I0417 19:19:44.007332    1495 scope.go:117] "RemoveContainer" containerID="63b95405d23c8b2fa3c38d3147e86ef16c9bd15be70f625cd5037cf443d7f4c7"
	Apr 17 19:19:44 addons-873604 kubelet[1495]: I0417 19:19:44.129661    1495 scope.go:117] "RemoveContainer" containerID="63b95405d23c8b2fa3c38d3147e86ef16c9bd15be70f625cd5037cf443d7f4c7"
	Apr 17 19:19:44 addons-873604 kubelet[1495]: I0417 19:19:44.130031    1495 scope.go:117] "RemoveContainer" containerID="10cd81cfbbcf7b0d9b4fb54057416e0c5dacff81215ea010b39c18a9d6f0648c"
	Apr 17 19:19:44 addons-873604 kubelet[1495]: E0417 19:19:44.130359    1495 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-dpqqm_default(69350501-1ef3-4b48-8974-ca1451a9592f)\"" pod="default/hello-world-app-86c47465fc-dpqqm" podUID="69350501-1ef3-4b48-8974-ca1451a9592f"
	Apr 17 19:19:59 addons-873604 kubelet[1495]: I0417 19:19:59.008163    1495 scope.go:117] "RemoveContainer" containerID="10cd81cfbbcf7b0d9b4fb54057416e0c5dacff81215ea010b39c18a9d6f0648c"
	Apr 17 19:19:59 addons-873604 kubelet[1495]: E0417 19:19:59.008460    1495 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-dpqqm_default(69350501-1ef3-4b48-8974-ca1451a9592f)\"" pod="default/hello-world-app-86c47465fc-dpqqm" podUID="69350501-1ef3-4b48-8974-ca1451a9592f"
	Apr 17 19:20:10 addons-873604 kubelet[1495]: I0417 19:20:10.007689    1495 scope.go:117] "RemoveContainer" containerID="10cd81cfbbcf7b0d9b4fb54057416e0c5dacff81215ea010b39c18a9d6f0648c"
	Apr 17 19:20:10 addons-873604 kubelet[1495]: E0417 19:20:10.008465    1495 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-dpqqm_default(69350501-1ef3-4b48-8974-ca1451a9592f)\"" pod="default/hello-world-app-86c47465fc-dpqqm" podUID="69350501-1ef3-4b48-8974-ca1451a9592f"
	Apr 17 19:20:11 addons-873604 kubelet[1495]: E0417 19:20:11.053693    1495 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/3fc24619954a7593d0d4dd5ee93c7caab0c44eeb97c7a5046a09cfe1e919f631, memory: /docker/3fc24619954a7593d0d4dd5ee93c7caab0c44eeb97c7a5046a09cfe1e919f631/system.slice/kubelet.service"
	Apr 17 19:20:11 addons-873604 kubelet[1495]: I0417 19:20:11.697717    1495 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z85gw\" (UniqueName: \"kubernetes.io/projected/da8a5501-6baf-4977-905c-f81fe98110e2-kube-api-access-z85gw\") pod \"da8a5501-6baf-4977-905c-f81fe98110e2\" (UID: \"da8a5501-6baf-4977-905c-f81fe98110e2\") "
	Apr 17 19:20:11 addons-873604 kubelet[1495]: I0417 19:20:11.697772    1495 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/da8a5501-6baf-4977-905c-f81fe98110e2-tmp-dir\") pod \"da8a5501-6baf-4977-905c-f81fe98110e2\" (UID: \"da8a5501-6baf-4977-905c-f81fe98110e2\") "
	Apr 17 19:20:11 addons-873604 kubelet[1495]: I0417 19:20:11.698575    1495 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da8a5501-6baf-4977-905c-f81fe98110e2-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "da8a5501-6baf-4977-905c-f81fe98110e2" (UID: "da8a5501-6baf-4977-905c-f81fe98110e2"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Apr 17 19:20:11 addons-873604 kubelet[1495]: I0417 19:20:11.703341    1495 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da8a5501-6baf-4977-905c-f81fe98110e2-kube-api-access-z85gw" (OuterVolumeSpecName: "kube-api-access-z85gw") pod "da8a5501-6baf-4977-905c-f81fe98110e2" (UID: "da8a5501-6baf-4977-905c-f81fe98110e2"). InnerVolumeSpecName "kube-api-access-z85gw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 17 19:20:11 addons-873604 kubelet[1495]: I0417 19:20:11.798242    1495 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-z85gw\" (UniqueName: \"kubernetes.io/projected/da8a5501-6baf-4977-905c-f81fe98110e2-kube-api-access-z85gw\") on node \"addons-873604\" DevicePath \"\""
	Apr 17 19:20:11 addons-873604 kubelet[1495]: I0417 19:20:11.798283    1495 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/da8a5501-6baf-4977-905c-f81fe98110e2-tmp-dir\") on node \"addons-873604\" DevicePath \"\""
	Apr 17 19:20:12 addons-873604 kubelet[1495]: I0417 19:20:12.192897    1495 scope.go:117] "RemoveContainer" containerID="97e99410da22d18466300d3822420c3d5591459f7ec534db2121a16f448f9d97"
	Apr 17 19:20:12 addons-873604 kubelet[1495]: I0417 19:20:12.235306    1495 scope.go:117] "RemoveContainer" containerID="97e99410da22d18466300d3822420c3d5591459f7ec534db2121a16f448f9d97"
	Apr 17 19:20:12 addons-873604 kubelet[1495]: E0417 19:20:12.236210    1495 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97e99410da22d18466300d3822420c3d5591459f7ec534db2121a16f448f9d97\": container with ID starting with 97e99410da22d18466300d3822420c3d5591459f7ec534db2121a16f448f9d97 not found: ID does not exist" containerID="97e99410da22d18466300d3822420c3d5591459f7ec534db2121a16f448f9d97"
	Apr 17 19:20:12 addons-873604 kubelet[1495]: I0417 19:20:12.236267    1495 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97e99410da22d18466300d3822420c3d5591459f7ec534db2121a16f448f9d97"} err="failed to get container status \"97e99410da22d18466300d3822420c3d5591459f7ec534db2121a16f448f9d97\": rpc error: code = NotFound desc = could not find container \"97e99410da22d18466300d3822420c3d5591459f7ec534db2121a16f448f9d97\": container with ID starting with 97e99410da22d18466300d3822420c3d5591459f7ec534db2121a16f448f9d97 not found: ID does not exist"
	
	
	==> storage-provisioner [348b13f6f5fc01b2dfacdc4caf00f99f248cb3578ddda6d5b6c3305ee786cfd0] <==
	I0417 19:10:58.983250       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0417 19:10:59.005117       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0417 19:10:59.005264       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0417 19:10:59.016530       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0417 19:10:59.016656       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"380d4bd5-c6be-4a43-a605-42fa0a26edb0", APIVersion:"v1", ResourceVersion:"929", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-873604_23055689-08a4-4681-b2fd-6136f51d4e9b became leader
	I0417 19:10:59.027267       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-873604_23055689-08a4-4681-b2fd-6136f51d4e9b!
	I0417 19:10:59.127664       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-873604_23055689-08a4-4681-b2fd-6136f51d4e9b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-873604 -n addons-873604
helpers_test.go:261: (dbg) Run:  kubectl --context addons-873604 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (364.68s)

                                                
                                    

Test pass (296/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.08
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.0-rc.2/json-events 6.67
13 TestDownloadOnly/v1.30.0-rc.2/preload-exists 0
17 TestDownloadOnly/v1.30.0-rc.2/LogsDuration 0.09
18 TestDownloadOnly/v1.30.0-rc.2/DeleteAll 0.21
19 TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 217.05
29 TestAddons/parallel/Registry 16.23
31 TestAddons/parallel/InspektorGadget 11.8
35 TestAddons/parallel/CSI 45.34
36 TestAddons/parallel/Headlamp 12.37
37 TestAddons/parallel/CloudSpanner 5.61
38 TestAddons/parallel/LocalPath 54.8
39 TestAddons/parallel/NvidiaDevicePlugin 5.63
40 TestAddons/parallel/Yakd 6.01
43 TestAddons/serial/GCPAuth/Namespaces 0.17
44 TestAddons/StoppedEnableDisable 12.28
45 TestCertOptions 38.85
46 TestCertExpiration 237.21
48 TestForceSystemdFlag 38.23
49 TestForceSystemdEnv 46.6
55 TestErrorSpam/setup 28.35
56 TestErrorSpam/start 0.7
57 TestErrorSpam/status 0.99
58 TestErrorSpam/pause 1.73
59 TestErrorSpam/unpause 1.79
60 TestErrorSpam/stop 1.45
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 79.77
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 38.33
67 TestFunctional/serial/KubeContext 0.06
68 TestFunctional/serial/KubectlGetPods 0.1
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.59
72 TestFunctional/serial/CacheCmd/cache/add_local 1.14
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
74 TestFunctional/serial/CacheCmd/cache/list 0.07
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
76 TestFunctional/serial/CacheCmd/cache/cache_reload 2.06
77 TestFunctional/serial/CacheCmd/cache/delete 0.14
78 TestFunctional/serial/MinikubeKubectlCmd 0.14
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
80 TestFunctional/serial/ExtraConfig 33.89
81 TestFunctional/serial/ComponentHealth 0.1
82 TestFunctional/serial/LogsCmd 1.67
83 TestFunctional/serial/LogsFileCmd 1.7
84 TestFunctional/serial/InvalidService 3.93
86 TestFunctional/parallel/ConfigCmd 0.57
87 TestFunctional/parallel/DashboardCmd 9.26
88 TestFunctional/parallel/DryRun 0.63
89 TestFunctional/parallel/InternationalLanguage 0.26
90 TestFunctional/parallel/StatusCmd 1.03
94 TestFunctional/parallel/ServiceCmdConnect 7.62
95 TestFunctional/parallel/AddonsCmd 0.17
96 TestFunctional/parallel/PersistentVolumeClaim 26.94
98 TestFunctional/parallel/SSHCmd 0.67
99 TestFunctional/parallel/CpCmd 2.03
101 TestFunctional/parallel/FileSync 0.36
102 TestFunctional/parallel/CertSync 1.98
106 TestFunctional/parallel/NodeLabels 0.09
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.67
110 TestFunctional/parallel/License 0.37
111 TestFunctional/parallel/Version/short 0.09
112 TestFunctional/parallel/Version/components 0.93
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.36
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
117 TestFunctional/parallel/ImageCommands/ImageBuild 5.18
118 TestFunctional/parallel/ImageCommands/Setup 2.53
119 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
120 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
121 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
122 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.66
123 TestFunctional/parallel/ServiceCmd/DeployApp 10.3
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.05
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.63
126 TestFunctional/parallel/ServiceCmd/List 0.45
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.42
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
129 TestFunctional/parallel/ServiceCmd/Format 0.48
130 TestFunctional/parallel/ServiceCmd/URL 0.47
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.7
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.55
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.07
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.85
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.56
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.99
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
146 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
147 TestFunctional/parallel/ProfileCmd/profile_list 0.4
148 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
149 TestFunctional/parallel/MountCmd/any-port 7.93
150 TestFunctional/parallel/MountCmd/specific-port 1.91
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.48
152 TestFunctional/delete_addon-resizer_images 0.1
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 158.68
159 TestMultiControlPlane/serial/DeployApp 8.08
160 TestMultiControlPlane/serial/PingHostFromPods 1.79
161 TestMultiControlPlane/serial/AddWorkerNode 55.89
162 TestMultiControlPlane/serial/NodeLabels 0.12
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.78
164 TestMultiControlPlane/serial/CopyFile 19.22
165 TestMultiControlPlane/serial/StopSecondaryNode 12.75
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.53
167 TestMultiControlPlane/serial/RestartSecondaryNode 35.69
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 4.59
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 192.63
170 TestMultiControlPlane/serial/DeleteSecondaryNode 13.04
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.55
172 TestMultiControlPlane/serial/StopCluster 35.72
173 TestMultiControlPlane/serial/RestartCluster 74.45
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.55
175 TestMultiControlPlane/serial/AddSecondaryNode 60.81
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.79
180 TestJSONOutput/start/Command 52.73
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.74
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.69
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.84
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.23
205 TestKicCustomNetwork/create_custom_network 38.79
206 TestKicCustomNetwork/use_default_bridge_network 33.58
207 TestKicExistingNetwork 36.13
208 TestKicCustomSubnet 31.35
209 TestKicStaticIP 31.86
210 TestMainNoArgs 0.06
211 TestMinikubeProfile 66.11
214 TestMountStart/serial/StartWithMountFirst 7.52
215 TestMountStart/serial/VerifyMountFirst 0.27
216 TestMountStart/serial/StartWithMountSecond 9.31
217 TestMountStart/serial/VerifyMountSecond 0.28
218 TestMountStart/serial/DeleteFirst 1.61
219 TestMountStart/serial/VerifyMountPostDelete 0.27
220 TestMountStart/serial/Stop 1.2
221 TestMountStart/serial/RestartStopped 7.74
222 TestMountStart/serial/VerifyMountPostStop 0.26
225 TestMultiNode/serial/FreshStart2Nodes 92.35
226 TestMultiNode/serial/DeployApp2Nodes 4.69
227 TestMultiNode/serial/PingHostFrom2Pods 1.05
228 TestMultiNode/serial/AddNode 47.84
229 TestMultiNode/serial/MultiNodeLabels 0.09
230 TestMultiNode/serial/ProfileList 0.36
231 TestMultiNode/serial/CopyFile 10.24
232 TestMultiNode/serial/StopNode 2.26
233 TestMultiNode/serial/StartAfterStop 10.43
234 TestMultiNode/serial/RestartKeepsNodes 88.55
235 TestMultiNode/serial/DeleteNode 5.24
236 TestMultiNode/serial/StopMultiNode 23.83
237 TestMultiNode/serial/RestartMultiNode 61
238 TestMultiNode/serial/ValidateNameConflict 34.12
243 TestPreload 114.13
245 TestScheduledStopUnix 105.36
248 TestInsufficientStorage 11.04
249 TestRunningBinaryUpgrade 92.96
251 TestKubernetesUpgrade 393.37
252 TestMissingContainerUpgrade 147.6
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
255 TestNoKubernetes/serial/StartWithK8s 41.74
256 TestNoKubernetes/serial/StartWithStopK8s 26.2
257 TestNoKubernetes/serial/Start 8.79
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
259 TestNoKubernetes/serial/ProfileList 7.52
260 TestNoKubernetes/serial/Stop 1.23
261 TestNoKubernetes/serial/StartNoArgs 7.49
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
263 TestStoppedBinaryUpgrade/Setup 1.11
264 TestStoppedBinaryUpgrade/Upgrade 78.08
265 TestStoppedBinaryUpgrade/MinikubeLogs 1.26
274 TestPause/serial/Start 78.29
275 TestPause/serial/SecondStartNoReconfiguration 33.03
276 TestPause/serial/Pause 1.21
277 TestPause/serial/VerifyStatus 0.51
278 TestPause/serial/Unpause 0.93
279 TestPause/serial/PauseAgain 1.21
280 TestPause/serial/DeletePaused 3.06
281 TestPause/serial/VerifyDeletedResources 0.7
289 TestNetworkPlugins/group/false 5.56
294 TestStartStop/group/old-k8s-version/serial/FirstStart 157.39
295 TestStartStop/group/old-k8s-version/serial/DeployApp 9.82
297 TestStartStop/group/no-preload/serial/FirstStart 63.02
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.89
299 TestStartStop/group/old-k8s-version/serial/Stop 14.56
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
301 TestStartStop/group/old-k8s-version/serial/SecondStart 148.17
302 TestStartStop/group/no-preload/serial/DeployApp 9.49
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.44
304 TestStartStop/group/no-preload/serial/Stop 12.71
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
306 TestStartStop/group/no-preload/serial/SecondStart 266.13
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.1
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
310 TestStartStop/group/old-k8s-version/serial/Pause 2.99
312 TestStartStop/group/embed-certs/serial/FirstStart 78.89
313 TestStartStop/group/embed-certs/serial/DeployApp 9.33
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.12
315 TestStartStop/group/embed-certs/serial/Stop 12.02
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
317 TestStartStop/group/embed-certs/serial/SecondStart 274.28
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
321 TestStartStop/group/no-preload/serial/Pause 3.09
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 50.97
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.35
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.12
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.17
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
328 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 295.78
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.1
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
332 TestStartStop/group/embed-certs/serial/Pause 3.13
334 TestStartStop/group/newest-cni/serial/FirstStart 47.2
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.03
337 TestStartStop/group/newest-cni/serial/Stop 1.28
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
339 TestStartStop/group/newest-cni/serial/SecondStart 16.87
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
343 TestStartStop/group/newest-cni/serial/Pause 2.99
344 TestNetworkPlugins/group/auto/Start 78.57
345 TestNetworkPlugins/group/auto/KubeletFlags 0.35
346 TestNetworkPlugins/group/auto/NetCatPod 10.39
347 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
348 TestNetworkPlugins/group/auto/DNS 0.18
349 TestNetworkPlugins/group/auto/Localhost 0.16
350 TestNetworkPlugins/group/auto/HairPin 0.15
351 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.13
352 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
353 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.39
354 TestNetworkPlugins/group/kindnet/Start 85.75
355 TestNetworkPlugins/group/calico/Start 77.76
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/calico/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.55
359 TestNetworkPlugins/group/kindnet/NetCatPod 12.27
360 TestNetworkPlugins/group/calico/KubeletFlags 0.45
361 TestNetworkPlugins/group/calico/NetCatPod 10.45
362 TestNetworkPlugins/group/kindnet/DNS 0.22
363 TestNetworkPlugins/group/kindnet/Localhost 0.15
364 TestNetworkPlugins/group/kindnet/HairPin 0.15
365 TestNetworkPlugins/group/calico/DNS 0.3
366 TestNetworkPlugins/group/calico/Localhost 0.17
367 TestNetworkPlugins/group/calico/HairPin 0.18
368 TestNetworkPlugins/group/custom-flannel/Start 72.07
369 TestNetworkPlugins/group/enable-default-cni/Start 96.16
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.28
372 TestNetworkPlugins/group/custom-flannel/DNS 0.2
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
375 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
376 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.31
377 TestNetworkPlugins/group/flannel/Start 70.3
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.34
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.35
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.27
381 TestNetworkPlugins/group/bridge/Start 63.77
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
384 TestNetworkPlugins/group/flannel/NetCatPod 11.41
385 TestNetworkPlugins/group/flannel/DNS 0.19
386 TestNetworkPlugins/group/flannel/Localhost 0.15
387 TestNetworkPlugins/group/flannel/HairPin 0.17
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
389 TestNetworkPlugins/group/bridge/NetCatPod 12.46
390 TestNetworkPlugins/group/bridge/DNS 0.28
391 TestNetworkPlugins/group/bridge/Localhost 0.2
392 TestNetworkPlugins/group/bridge/HairPin 0.21
TestDownloadOnly/v1.20.0/json-events (9.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-251262 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-251262 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.078278831s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.08s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-251262
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-251262: exit status 85 (80.301325ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-251262 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC |          |
	|         | -p download-only-251262        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|         | --driver=docker                |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/17 19:09:06
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0417 19:09:06.827459  693523 out.go:291] Setting OutFile to fd 1 ...
	I0417 19:09:06.827645  693523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:09:06.827660  693523 out.go:304] Setting ErrFile to fd 2...
	I0417 19:09:06.827667  693523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:09:06.827962  693523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-688109/.minikube/bin
	W0417 19:09:06.828175  693523 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18665-688109/.minikube/config/config.json: open /home/jenkins/minikube-integration/18665-688109/.minikube/config/config.json: no such file or directory
	I0417 19:09:06.828726  693523 out.go:298] Setting JSON to true
	I0417 19:09:06.829694  693523 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10294,"bootTime":1713370653,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1057-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0417 19:09:06.829770  693523 start.go:139] virtualization:  
	I0417 19:09:06.832969  693523 out.go:97] [download-only-251262] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0417 19:09:06.835227  693523 out.go:169] MINIKUBE_LOCATION=18665
	W0417 19:09:06.833137  693523 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18665-688109/.minikube/cache/preloaded-tarball: no such file or directory
	I0417 19:09:06.833187  693523 notify.go:220] Checking for updates...
	I0417 19:09:06.839889  693523 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 19:09:06.842247  693523 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18665-688109/kubeconfig
	I0417 19:09:06.844197  693523 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-688109/.minikube
	I0417 19:09:06.846170  693523 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0417 19:09:06.850453  693523 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0417 19:09:06.850733  693523 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 19:09:06.870529  693523 docker.go:122] docker version: linux-26.0.1:Docker Engine - Community
	I0417 19:09:06.870633  693523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0417 19:09:06.926622  693523 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-17 19:09:06.917724201 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0417 19:09:06.926730  693523 docker.go:295] overlay module found
	I0417 19:09:06.928931  693523 out.go:97] Using the docker driver based on user configuration
	I0417 19:09:06.928969  693523 start.go:297] selected driver: docker
	I0417 19:09:06.928978  693523 start.go:901] validating driver "docker" against <nil>
	I0417 19:09:06.929108  693523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0417 19:09:06.982597  693523 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-17 19:09:06.974021241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0417 19:09:06.982792  693523 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0417 19:09:06.983056  693523 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0417 19:09:06.983217  693523 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0417 19:09:06.985721  693523 out.go:169] Using Docker driver with root privileges
	I0417 19:09:06.987933  693523 cni.go:84] Creating CNI manager for ""
	I0417 19:09:06.987958  693523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0417 19:09:06.987968  693523 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0417 19:09:06.988066  693523 start.go:340] cluster config:
	{Name:download-only-251262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-251262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:09:06.990303  693523 out.go:97] Starting "download-only-251262" primary control-plane node in "download-only-251262" cluster
	I0417 19:09:06.990335  693523 cache.go:121] Beginning downloading kic base image for docker with crio
	I0417 19:09:06.992627  693523 out.go:97] Pulling base image v0.0.43-1713236840-18649 ...
	I0417 19:09:06.992658  693523 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0417 19:09:06.992791  693523 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e in local docker daemon
	I0417 19:09:07.008235  693523 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e to local cache
	I0417 19:09:07.008465  693523 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e in local cache directory
	I0417 19:09:07.008565  693523 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e to local cache
	I0417 19:09:07.063786  693523 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0417 19:09:07.063812  693523 cache.go:56] Caching tarball of preloaded images
	I0417 19:09:07.063952  693523 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0417 19:09:07.066939  693523 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0417 19:09:07.066975  693523 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0417 19:09:07.313124  693523 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/18665-688109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-251262 host does not exist
	  To start a cluster, run: "minikube start -p download-only-251262"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-251262
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.30.0-rc.2/json-events (6.67s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-545184 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-545184 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.671572411s)
--- PASS: TestDownloadOnly/v1.30.0-rc.2/json-events (6.67s)

TestDownloadOnly/v1.30.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-545184
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-545184: exit status 85 (89.958997ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-251262 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC |                     |
	|         | -p download-only-251262           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|         | --driver=docker                   |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC | 17 Apr 24 19:09 UTC |
	| delete  | -p download-only-251262           | download-only-251262 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC | 17 Apr 24 19:09 UTC |
	| start   | -o=json --download-only           | download-only-545184 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:09 UTC |                     |
	|         | -p download-only-545184           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2 |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|         | --driver=docker                   |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/17 19:09:16
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0417 19:09:16.329181  693692 out.go:291] Setting OutFile to fd 1 ...
	I0417 19:09:16.329339  693692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:09:16.329347  693692 out.go:304] Setting ErrFile to fd 2...
	I0417 19:09:16.329353  693692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:09:16.329594  693692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-688109/.minikube/bin
	I0417 19:09:16.330008  693692 out.go:298] Setting JSON to true
	I0417 19:09:16.330922  693692 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10303,"bootTime":1713370653,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1057-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0417 19:09:16.330990  693692 start.go:139] virtualization:  
	I0417 19:09:16.333750  693692 out.go:97] [download-only-545184] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0417 19:09:16.336539  693692 out.go:169] MINIKUBE_LOCATION=18665
	I0417 19:09:16.334010  693692 notify.go:220] Checking for updates...
	I0417 19:09:16.338757  693692 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 19:09:16.341221  693692 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18665-688109/kubeconfig
	I0417 19:09:16.343375  693692 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-688109/.minikube
	I0417 19:09:16.345439  693692 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0417 19:09:16.349841  693692 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0417 19:09:16.350141  693692 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 19:09:16.370525  693692 docker.go:122] docker version: linux-26.0.1:Docker Engine - Community
	I0417 19:09:16.370649  693692 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0417 19:09:16.427406  693692 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2024-04-17 19:09:16.417929891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0417 19:09:16.427518  693692 docker.go:295] overlay module found
	I0417 19:09:16.429779  693692 out.go:97] Using the docker driver based on user configuration
	I0417 19:09:16.429809  693692 start.go:297] selected driver: docker
	I0417 19:09:16.429815  693692 start.go:901] validating driver "docker" against <nil>
	I0417 19:09:16.429931  693692 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0417 19:09:16.486795  693692 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2024-04-17 19:09:16.477996195 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0417 19:09:16.486957  693692 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0417 19:09:16.487262  693692 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0417 19:09:16.487458  693692 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0417 19:09:16.489753  693692 out.go:169] Using Docker driver with root privileges
	I0417 19:09:16.492119  693692 cni.go:84] Creating CNI manager for ""
	I0417 19:09:16.492151  693692 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0417 19:09:16.492162  693692 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0417 19:09:16.492258  693692 start.go:340] cluster config:
	{Name:download-only-545184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:download-only-545184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:09:16.494576  693692 out.go:97] Starting "download-only-545184" primary control-plane node in "download-only-545184" cluster
	I0417 19:09:16.494602  693692 cache.go:121] Beginning downloading kic base image for docker with crio
	I0417 19:09:16.497004  693692 out.go:97] Pulling base image v0.0.43-1713236840-18649 ...
	I0417 19:09:16.497045  693692 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 19:09:16.497170  693692 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e in local docker daemon
	I0417 19:09:16.511299  693692 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e to local cache
	I0417 19:09:16.511442  693692 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e in local cache directory
	I0417 19:09:16.511463  693692 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e in local cache directory, skipping pull
	I0417 19:09:16.511469  693692 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e exists in cache, skipping pull
	I0417 19:09:16.511477  693692 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e as a tarball
	I0417 19:09:16.569077  693692 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I0417 19:09:16.569102  693692 cache.go:56] Caching tarball of preloaded images
	I0417 19:09:16.570089  693692 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 19:09:16.572728  693692 out.go:97] Downloading Kubernetes v1.30.0-rc.2 preload ...
	I0417 19:09:16.572773  693692 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I0417 19:09:16.681200  693692 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:9794e1af5fd17bc197170641afe8e163 -> /home/jenkins/minikube-integration/18665-688109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I0417 19:09:21.308400  693692 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I0417 19:09:21.308504  693692 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18665-688109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-545184 host does not exist
	  To start a cluster, run: "minikube start -p download-only-545184"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.09s)

TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.21s)

TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-545184
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-250898 --alsologtostderr --binary-mirror http://127.0.0.1:33811 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-250898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-250898
--- PASS: TestBinaryMirror (0.59s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-873604
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-873604: exit status 85 (88.410502ms)

-- stdout --
	* Profile "addons-873604" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-873604"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-873604
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-873604: exit status 85 (85.459139ms)

-- stdout --
	* Profile "addons-873604" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-873604"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (217.05s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-873604 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-873604 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m37.046844429s)
--- PASS: TestAddons/Setup (217.05s)

TestAddons/parallel/Registry (16.23s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 50.524276ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-hlj26" [bd1989fe-0b5a-41a4-ae03-88af2d34eb0d] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.012007681s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qwqgq" [e06390fb-d1dc-4627-80a9-02edada26c01] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004890967s
addons_test.go:340: (dbg) Run:  kubectl --context addons-873604 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-873604 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-873604 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.159102645s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-873604 ip
2024/04/17 19:13:17 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-873604 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.23s)

TestAddons/parallel/InspektorGadget (11.8s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-x8rbb" [246bc11a-b1a3-4596-bf39-4dc3370a54e5] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003910975s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-873604
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-873604: (5.795359663s)
--- PASS: TestAddons/parallel/InspektorGadget (11.80s)

TestAddons/parallel/CSI (45.34s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 5.300607ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-873604 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-873604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-873604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-873604 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-873604 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7ed463e5-107d-4038-977e-9dddccc56a90] Pending
helpers_test.go:344: "task-pv-pod" [7ed463e5-107d-4038-977e-9dddccc56a90] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7ed463e5-107d-4038-977e-9dddccc56a90] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003419555s
addons_test.go:584: (dbg) Run:  kubectl --context addons-873604 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-873604 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-873604 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-873604 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-873604 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-873604 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-873604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-873604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-873604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-873604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-873604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-873604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-873604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-873604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-873604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-873604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-873604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-873604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-873604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-873604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-873604 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [df0e56b9-4d3d-4305-abb7-be5213986c44] Pending
helpers_test.go:344: "task-pv-pod-restore" [df0e56b9-4d3d-4305-abb7-be5213986c44] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [df0e56b9-4d3d-4305-abb7-be5213986c44] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004440957s
addons_test.go:626: (dbg) Run:  kubectl --context addons-873604 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-873604 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-873604 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-873604 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-873604 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.747820889s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-873604 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (45.34s)
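The repeated helpers_test.go:394 lines above are a single poll loop: the helper re-runs `kubectl get pvc … -o jsonpath={.status.phase}` until the PVC reports the wanted phase or the timeout expires. A minimal sketch of that pattern; `get_phase` is a hypothetical stub standing in for the real kubectl call:

```shell
#!/bin/sh
# Poll-until-match, the pattern behind the repeated helpers_test.go:394 lines.
# get_phase stands in for:
#   kubectl --context addons-873604 get pvc hpvc-restore -o jsonpath={.status.phase}
i=0
get_phase() {
  i=$((i + 1))                  # count polls; the real helper shells out instead
  if [ "$i" -ge 3 ]; then PHASE=Bound; else PHASE=Pending; fi
}

wait_for_phase() {
  want=$1; tries=$2; n=0
  while [ "$n" -lt "$tries" ]; do
    get_phase
    if [ "$PHASE" = "$want" ]; then
      echo "pvc reached $want after $((n + 1)) polls"
      return 0
    fi
    n=$((n + 1))
  done
  echo "timed out waiting for $want" >&2
  return 1
}

wait_for_phase Bound 10         # prints: pvc reached Bound after 3 polls
```

The real helper also sleeps between polls and bounds the whole loop by wall-clock time (6m0s above) rather than a fixed try count.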

                                                
                                    
TestAddons/parallel/Headlamp (12.37s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-873604 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-873604 --alsologtostderr -v=1: (1.367373698s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-4rl7c" [35fb3e81-6406-4fc5-a1d9-cf3c8db3d7cb] Pending
helpers_test.go:344: "headlamp-7559bf459f-4rl7c" [35fb3e81-6406-4fc5-a1d9-cf3c8db3d7cb] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-4rl7c" [35fb3e81-6406-4fc5-a1d9-cf3c8db3d7cb] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.005008861s
--- PASS: TestAddons/parallel/Headlamp (12.37s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.61s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-8677549d7-4g9kv" [2f2e67b5-30f1-43b7-b3c8-86af580c1ad6] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004273506s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-873604
--- PASS: TestAddons/parallel/CloudSpanner (5.61s)

                                                
                                    
TestAddons/parallel/LocalPath (54.8s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-873604 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-873604 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-873604 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-873604 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-873604 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-873604 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-873604 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-873604 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-873604 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3dd88c84-b9c2-4038-bdd5-420659014035] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3dd88c84-b9c2-4038-bdd5-420659014035] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3dd88c84-b9c2-4038-bdd5-420659014035] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00409338s
addons_test.go:891: (dbg) Run:  kubectl --context addons-873604 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-873604 ssh "cat /opt/local-path-provisioner/pvc-814c2d54-9fef-4b2f-bb69-2330200001c7_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-873604 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-873604 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-873604 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-873604 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.399720164s)
--- PASS: TestAddons/parallel/LocalPath (54.80s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.63s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-6lc6l" [88c0cead-b0d0-4699-b183-dab722233906] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004591609s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-873604
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.63s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-shv8z" [af1a11a0-3648-43d1-b8b4-2d6251943843] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004380611s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-873604 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-873604 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.28s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-873604
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-873604: (11.982303747s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-873604
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-873604
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-873604
--- PASS: TestAddons/StoppedEnableDisable (12.28s)

                                                
                                    
TestCertOptions (38.85s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-735372 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-735372 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.157141207s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-735372 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-735372 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-735372 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-735372" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-735372
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-735372: (2.02221013s)
--- PASS: TestCertOptions (38.85s)
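cert_options_test.go:60 asserts that the extra `--apiserver-ips`/`--apiserver-names` values made it into apiserver.crt as subject alternative names, by running `openssl x509 -text` over ssh. The same inspection can be reproduced locally against a self-signed certificate carrying those SANs (a sketch, assuming openssl >= 1.1.1 for `-addext` and `x509 -ext`; the file names are illustrative):

```shell
# Generate a throwaway cert with the SANs the test passes on the command line.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout key.pem -out cert.pem -subj "/CN=minikube" \
  -addext "subjectAltName=DNS:localhost,DNS:www.google.com,IP:127.0.0.1,IP:192.168.15.15" \
  2>/dev/null

# Instead of grepping the full -text dump, ask for just the SAN extension.
openssl x509 -noout -ext subjectAltName -in cert.pem
```

The second command prints the `X509v3 Subject Alternative Name` block, where each requested IP and DNS name should appear.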

                                                
                                    
TestCertExpiration (237.21s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-669344 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-669344 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (37.465365259s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-669344 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-669344 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (17.291017851s)
helpers_test.go:175: Cleaning up "cert-expiration-669344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-669344
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-669344: (2.450697804s)
--- PASS: TestCertExpiration (237.21s)
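cert_options_test.go:123/131 starts the cluster with certs that expire in 3m, waits, then restarts with `--cert-expiration=8760h`, which forces minikube to regenerate them. The underlying "does this cert expire within N seconds?" probe can be sketched with plain openssl (file names illustrative):

```shell
# Throwaway long-lived cert to probe.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout key.pem -out cert.pem -subj "/CN=minikube" 2>/dev/null

# -checkend N exits 0 iff the cert is still valid N seconds from now.
if openssl x509 -noout -in cert.pem -checkend 180 >/dev/null; then
  echo "cert valid for at least another 3m"
else
  echo "cert expires within 3m"
fi
```

Run against a cert created with `--cert-expiration=3m`, the same probe would take the second branch once the window closes.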

                                                
                                    
TestForceSystemdFlag (38.23s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-700608 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-700608 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.526185223s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-700608 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-700608" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-700608
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-700608: (2.388388495s)
--- PASS: TestForceSystemdFlag (38.23s)
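docker_test.go:132 cats /etc/crio/crio.conf.d/02-crio.conf to confirm that `--force-systemd` switched CRI-O to the systemd cgroup manager. A sketch of that assertion against a sample drop-in (the key names are CRI-O's; the exact contents minikube writes may differ):

```shell
# Sample CRI-O drop-in of the shape the test inspects.
cat > 02-crio.conf <<'EOF'
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
EOF

# The check itself: is the systemd cgroup manager configured?
grep -q 'cgroup_manager = "systemd"' 02-crio.conf && echo "systemd cgroup manager configured"
```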

                                                
                                    
TestForceSystemdEnv (46.6s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-768232 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-768232 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (44.041035112s)
helpers_test.go:175: Cleaning up "force-systemd-env-768232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-768232
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-768232: (2.55768876s)
--- PASS: TestForceSystemdEnv (46.60s)

                                                
                                    
TestErrorSpam/setup (28.35s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-189315 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-189315 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-189315 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-189315 --driver=docker  --container-runtime=crio: (28.346293766s)
--- PASS: TestErrorSpam/setup (28.35s)

                                                
                                    
TestErrorSpam/start (0.7s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-189315 --log_dir /tmp/nospam-189315 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-189315 --log_dir /tmp/nospam-189315 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-189315 --log_dir /tmp/nospam-189315 start --dry-run
--- PASS: TestErrorSpam/start (0.70s)

                                                
                                    
TestErrorSpam/status (0.99s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-189315 --log_dir /tmp/nospam-189315 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-189315 --log_dir /tmp/nospam-189315 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-189315 --log_dir /tmp/nospam-189315 status
--- PASS: TestErrorSpam/status (0.99s)

                                                
                                    
TestErrorSpam/pause (1.73s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-189315 --log_dir /tmp/nospam-189315 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-189315 --log_dir /tmp/nospam-189315 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-189315 --log_dir /tmp/nospam-189315 pause
--- PASS: TestErrorSpam/pause (1.73s)

                                                
                                    
TestErrorSpam/unpause (1.79s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-189315 --log_dir /tmp/nospam-189315 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-189315 --log_dir /tmp/nospam-189315 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-189315 --log_dir /tmp/nospam-189315 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

                                                
                                    
TestErrorSpam/stop (1.45s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-189315 --log_dir /tmp/nospam-189315 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-189315 --log_dir /tmp/nospam-189315 stop: (1.245346308s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-189315 --log_dir /tmp/nospam-189315 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-189315 --log_dir /tmp/nospam-189315 stop
--- PASS: TestErrorSpam/stop (1.45s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18665-688109/.minikube/files/etc/test/nested/copy/693518/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (79.77s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-532156 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-532156 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m19.771152556s)
--- PASS: TestFunctional/serial/StartWithProxy (79.77s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (38.33s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-532156 --alsologtostderr -v=8
E0417 19:23:01.948637  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
E0417 19:23:01.954257  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
E0417 19:23:01.964579  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
E0417 19:23:01.984950  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
E0417 19:23:02.025357  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
E0417 19:23:02.105766  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
E0417 19:23:02.266235  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
E0417 19:23:02.586820  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
E0417 19:23:03.227899  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
E0417 19:23:04.508180  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
E0417 19:23:07.069165  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
E0417 19:23:12.189550  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
E0417 19:23:22.430099  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-532156 --alsologtostderr -v=8: (38.324848206s)
functional_test.go:659: soft start took 38.325718183s for "functional-532156" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.33s)
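The E0417 cert_rotation errors above come ~10ms apart, then 20, 40, 80ms, and so on out to ~10s: client-go retrying the stale addons-873604 client.crt (deleted when that profile was removed) on what looks like a doubling backoff. The retry schedule, sketched without the sleeps (the 10ms seed is inferred from the log timestamps, not taken from client-go):

```shell
# Doubling backoff schedule matching the spacing of the E0417 lines (in ms).
delay=10
for attempt in 1 2 3 4 5 6 7 8 9 10; do
  echo "attempt $attempt: next retry in ${delay}ms"
  delay=$((delay * 2))
done
```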

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-532156 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-532156 cache add registry.k8s.io/pause:3.1: (1.248422307s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-532156 cache add registry.k8s.io/pause:3.3: (1.190527019s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-532156 cache add registry.k8s.io/pause:latest: (1.148817753s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.59s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-532156 /tmp/TestFunctionalserialCacheCmdcacheadd_local989982729/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 cache add minikube-local-cache-test:functional-532156
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 cache delete minikube-local-cache-test:functional-532156
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-532156
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-532156 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (324.897581ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-532156 cache reload: (1.082346265s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)
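The reload sequence above follows a remove, verify-absent, reload, verify-present shape. A minimal self-contained sketch of that control flow (the `store` variable and the `rmi`/`inspecti`/`reload` functions here are hypothetical stand-ins for the crictl and minikube commands in the log, not the real tools):

```shell
#!/bin/sh
# Hypothetical stand-ins: 'store' models the node's image store; the real test
# drives crictl and minikube against a running cluster instead.
store="registry.k8s.io/pause:latest"
rmi()      { store=""; }                              # stand-in for: crictl rmi <image>
inspecti() { [ -n "$store" ]; }                       # stand-in for: crictl inspecti <image>
reload()   { store="registry.k8s.io/pause:latest"; }  # stand-in for: minikube cache reload

rmi
inspecti || echo "image absent"     # fails after rmi, as in the log above
reload
inspecti && echo "image restored"   # succeeds again after the reload
```

The key point the test checks is that `inspecti` returns a non-zero status between the `rmi` and the `reload`, and zero afterwards.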

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 kubectl -- --context functional-532156 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-532156 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.89s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-532156 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0417 19:23:42.910256  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-532156 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.884950063s)
functional_test.go:757: restart took 33.885073169s for "functional-532156" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.89s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-532156 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.67s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-532156 logs: (1.674722725s)
--- PASS: TestFunctional/serial/LogsCmd (1.67s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.7s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 logs --file /tmp/TestFunctionalserialLogsFileCmd3067777111/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-532156 logs --file /tmp/TestFunctionalserialLogsFileCmd3067777111/001/logs.txt: (1.702928899s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.70s)

                                                
                                    
TestFunctional/serial/InvalidService (3.93s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-532156 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-532156
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-532156: exit status 115 (438.054295ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31333 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-532156 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.93s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-532156 config get cpus: exit status 14 (94.584451ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-532156 config get cpus: exit status 14 (98.826777ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.57s)
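The ConfigCmd run above distinguishes "key not found" (exit status 14) from a successful lookup. The exit-status branching it exercises can be sketched as follows; `get_cpus` is a hypothetical stub standing in for `minikube config get cpus`, not minikube itself:

```shell
#!/bin/sh
# Hypothetical stub: prints the value when CPUS is set, otherwise exits 14,
# mirroring the "specified key could not be found in config" status in the log.
get_cpus() {
  if [ -n "${CPUS:-}" ]; then printf '%s\n' "$CPUS"; else return 14; fi
}

rc=0; get_cpus || rc=$?
[ "$rc" -eq 14 ] && echo "key not set (status 14)"

CPUS=2        # stand-in for: minikube config set cpus 2
get_cpus      # now succeeds and prints the value
```

Capturing the status with `rc=$?` rather than testing the command directly keeps the specific code (14) distinguishable from other failures.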

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-532156 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-532156 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 722407: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.26s)

                                                
                                    
TestFunctional/parallel/DryRun (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-532156 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-532156 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (242.51999ms)

                                                
                                                
-- stdout --
	* [functional-532156] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18665
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18665-688109/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-688109/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0417 19:25:07.100138  721860 out.go:291] Setting OutFile to fd 1 ...
	I0417 19:25:07.100443  721860 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:25:07.100455  721860 out.go:304] Setting ErrFile to fd 2...
	I0417 19:25:07.100473  721860 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:25:07.100758  721860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-688109/.minikube/bin
	I0417 19:25:07.101148  721860 out.go:298] Setting JSON to false
	I0417 19:25:07.102106  721860 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11254,"bootTime":1713370653,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1057-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0417 19:25:07.102186  721860 start.go:139] virtualization:  
	I0417 19:25:07.116720  721860 out.go:177] * [functional-532156] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0417 19:25:07.119353  721860 out.go:177]   - MINIKUBE_LOCATION=18665
	I0417 19:25:07.121843  721860 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 19:25:07.119514  721860 notify.go:220] Checking for updates...
	I0417 19:25:07.127094  721860 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18665-688109/kubeconfig
	I0417 19:25:07.129358  721860 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-688109/.minikube
	I0417 19:25:07.131428  721860 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0417 19:25:07.133714  721860 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0417 19:25:07.136244  721860 config.go:182] Loaded profile config "functional-532156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:25:07.136882  721860 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 19:25:07.161230  721860 docker.go:122] docker version: linux-26.0.1:Docker Engine - Community
	I0417 19:25:07.161345  721860 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0417 19:25:07.252090  721860 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-04-17 19:25:07.231148734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0417 19:25:07.252193  721860 docker.go:295] overlay module found
	I0417 19:25:07.255132  721860 out.go:177] * Using the docker driver based on existing profile
	I0417 19:25:07.257252  721860 start.go:297] selected driver: docker
	I0417 19:25:07.257275  721860 start.go:901] validating driver "docker" against &{Name:functional-532156 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:functional-532156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:25:07.257374  721860 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0417 19:25:07.260909  721860 out.go:177] 
	W0417 19:25:07.263366  721860 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0417 19:25:07.265371  721860 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-532156 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.63s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-532156 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-532156 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (256.374516ms)

                                                
                                                
-- stdout --
	* [functional-532156] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18665
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18665-688109/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-688109/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0417 19:25:07.691195  722015 out.go:291] Setting OutFile to fd 1 ...
	I0417 19:25:07.691401  722015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:25:07.691431  722015 out.go:304] Setting ErrFile to fd 2...
	I0417 19:25:07.691450  722015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:25:07.691840  722015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-688109/.minikube/bin
	I0417 19:25:07.692268  722015 out.go:298] Setting JSON to false
	I0417 19:25:07.693429  722015 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11255,"bootTime":1713370653,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1057-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0417 19:25:07.693516  722015 start.go:139] virtualization:  
	I0417 19:25:07.696069  722015 out.go:177] * [functional-532156] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (arm64)
	I0417 19:25:07.698537  722015 out.go:177]   - MINIKUBE_LOCATION=18665
	I0417 19:25:07.700748  722015 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 19:25:07.698709  722015 notify.go:220] Checking for updates...
	I0417 19:25:07.704939  722015 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18665-688109/kubeconfig
	I0417 19:25:07.708489  722015 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-688109/.minikube
	I0417 19:25:07.711005  722015 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0417 19:25:07.713117  722015 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0417 19:25:07.715703  722015 config.go:182] Loaded profile config "functional-532156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:25:07.716308  722015 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 19:25:07.745743  722015 docker.go:122] docker version: linux-26.0.1:Docker Engine - Community
	I0417 19:25:07.745857  722015 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0417 19:25:07.838709  722015 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-04-17 19:25:07.826258689 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0417 19:25:07.838865  722015 docker.go:295] overlay module found
	I0417 19:25:07.841382  722015 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0417 19:25:07.843636  722015 start.go:297] selected driver: docker
	I0417 19:25:07.843673  722015 start.go:901] validating driver "docker" against &{Name:functional-532156 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:functional-532156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:25:07.843846  722015 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0417 19:25:07.848593  722015 out.go:177] 
	W0417 19:25:07.850843  722015 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0417 19:25:07.855058  722015 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)
TestFunctional/parallel/StatusCmd (1.03s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)
TestFunctional/parallel/ServiceCmdConnect (7.62s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-532156 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-532156 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-6ghv9" [084980c9-1d92-42c2-a9e1-a84d777288b8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-6ghv9" [084980c9-1d92-42c2-a9e1-a84d777288b8] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003583309s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:30782
functional_test.go:1671: http://192.168.49.2:30782: success! body:
Hostname: hello-node-connect-6f49f58cd5-6ghv9
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30782
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.62s)
TestFunctional/parallel/AddonsCmd (0.17s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)
TestFunctional/parallel/PersistentVolumeClaim (26.94s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b167e823-8469-47e2-8376-2e3e093a71ff] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006969186s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-532156 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-532156 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-532156 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-532156 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bf820b81-ac5c-423d-96f3-9b6ff0b7ed57] Pending
helpers_test.go:344: "sp-pod" [bf820b81-ac5c-423d-96f3-9b6ff0b7ed57] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bf820b81-ac5c-423d-96f3-9b6ff0b7ed57] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003403046s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-532156 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-532156 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-532156 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6d12b69d-b360-480c-a1b3-f271c911fe8e] Pending
helpers_test.go:344: "sp-pod" [6d12b69d-b360-480c-a1b3-f271c911fe8e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6d12b69d-b360-480c-a1b3-f271c911fe8e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.011230664s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-532156 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.94s)
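The sequence above exercises volume persistence: bind a claim, mount it in a pod, write a file, delete and recreate the pod, and confirm the file survives. A hypothetical minimal pair of manifests in the shape this test drives (names taken from the log: `myclaim`, `sp-pod`, `myfrontend`, `/tmp/mount`; the repo's actual `testdata/storage-provisioner/{pvc,pod}.yaml` may differ in details such as size and image):

```yaml
# Illustrative sketch only, not the repository's testdata.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: nginx            # assumed image
    volumeMounts:
    - name: mypd            # illustrative volume name
      mountPath: /tmp/mount
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
```

Because the second `sp-pod` binds the same `myclaim`, the `ls /tmp/mount` at the end can observe the `foo` file written before the first pod was deleted.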
TestFunctional/parallel/SSHCmd (0.67s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)
TestFunctional/parallel/CpCmd (2.03s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh -n functional-532156 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 cp functional-532156:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3494880009/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh -n functional-532156 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh -n functional-532156 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.03s)
TestFunctional/parallel/FileSync (0.36s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/693518/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh "sudo cat /etc/test/nested/copy/693518/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)
TestFunctional/parallel/CertSync (1.98s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/693518.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh "sudo cat /etc/ssl/certs/693518.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/693518.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh "sudo cat /usr/share/ca-certificates/693518.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/6935182.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh "sudo cat /etc/ssl/certs/6935182.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/6935182.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh "sudo cat /usr/share/ca-certificates/6935182.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.98s)
TestFunctional/parallel/NodeLabels (0.09s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-532156 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-532156 ssh "sudo systemctl is-active docker": exit status 1 (328.475917ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-532156 ssh "sudo systemctl is-active containerd": exit status 1 (344.767151ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)
TestFunctional/parallel/License (0.37s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.37s)
TestFunctional/parallel/Version/short (0.09s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)
TestFunctional/parallel/Version/components (0.93s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.93s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-532156 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0-rc.2
registry.k8s.io/kube-proxy:v1.30.0-rc.2
registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
registry.k8s.io/kube-apiserver:v1.30.0-rc.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-532156
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-532156 image ls --format short --alsologtostderr:
I0417 19:25:09.660738  722383 out.go:291] Setting OutFile to fd 1 ...
I0417 19:25:09.661214  722383 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 19:25:09.661229  722383 out.go:304] Setting ErrFile to fd 2...
I0417 19:25:09.661235  722383 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 19:25:09.661477  722383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-688109/.minikube/bin
I0417 19:25:09.662122  722383 config.go:182] Loaded profile config "functional-532156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
I0417 19:25:09.662242  722383 config.go:182] Loaded profile config "functional-532156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
I0417 19:25:09.662762  722383 cli_runner.go:164] Run: docker container inspect functional-532156 --format={{.State.Status}}
I0417 19:25:09.689128  722383 ssh_runner.go:195] Run: systemctl --version
I0417 19:25:09.689186  722383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-532156
I0417 19:25:09.720298  722383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/functional-532156/id_rsa Username:docker}
I0417 19:25:09.845776  722383 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.36s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-532156 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| localhost/my-image                      | functional-532156  | 259d14bdc4673 | 1.64MB |
| registry.k8s.io/kube-scheduler          | v1.30.0-rc.2       | 425022910de1d | 61.6MB |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4740c1948d3fc | 60.9MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-proxy              | v1.30.0-rc.2       | aa30953d3c2b4 | 89.1MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/library/nginx                 | latest             | 48b4217efe5ed | 196MB  |
| gcr.io/google-containers/addon-resizer  | functional-532156  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 014faa467e297 | 140MB  |
| registry.k8s.io/kube-controller-manager | v1.30.0-rc.2       | 88320cfaf308b | 108MB  |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| docker.io/library/nginx                 | alpine             | b8c82647e8a25 | 45.4MB |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | 2437cf7621777 | 58.8MB |
| registry.k8s.io/kube-apiserver          | v1.30.0-rc.2       | 78b24de5c18c4 | 114MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-532156 image ls --format table --alsologtostderr:
I0417 19:25:15.775152  722805 out.go:291] Setting OutFile to fd 1 ...
I0417 19:25:15.775386  722805 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 19:25:15.775409  722805 out.go:304] Setting ErrFile to fd 2...
I0417 19:25:15.775429  722805 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 19:25:15.775706  722805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-688109/.minikube/bin
I0417 19:25:15.776324  722805 config.go:182] Loaded profile config "functional-532156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
I0417 19:25:15.776508  722805 config.go:182] Loaded profile config "functional-532156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
I0417 19:25:15.777004  722805 cli_runner.go:164] Run: docker container inspect functional-532156 --format={{.State.Status}}
I0417 19:25:15.797993  722805 ssh_runner.go:195] Run: systemctl --version
I0417 19:25:15.798046  722805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-532156
I0417 19:25:15.815541  722805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/functional-532156/id_rsa Username:docker}
I0417 19:25:15.917484  722805 ssh_runner.go:195] Run: sudo crictl images --output json
2024/04/17 19:25:16 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-532156 image ls --format json --alsologtostderr:
[{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"88320cfaf308b507d1d1d6fa062612281320e1ca1add79c7b22b5b0a19756aa1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:a1050de799d912078dea095c6f55bba3af8358da0470ba57a24e0e6d081ff5b8","registry.k8s.io/kube-controller-manager@sha256:a200e9dde0e8d0f39b3f7739ca4c65c17f76e03a2a4990dc0ba1b30831009ed8"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0-rc.2"],"size":"108229958"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"48b4217efe5ed7e85a8946668b6adedb8242a5433da2c53273fb4c112f4c5d99","repoDigests":["docker.io/library/nginx@sha256:9ff236ed47fe39cf1f0acf349d0e5137f8b8a6fd0b46e5117a401010e56222e1","docker.io/library/nginx@sha256:fb6f7ddf4a57af3dc8acd2884def0e3a636ec198c733618e540905b4a1f9b9c6"],"repoTags":["docker.io/library/nginx:latest"],"size":"196122057"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"78b24de5c18c446278f50432f209bd786ff0d05a4d09b222d1f17998ae2ce121","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3c970620191febadad70f54370480a68daa722f3ba57f63ff2a71bfacd092053","registry.k8s.io/kube-apiserver@sha256:e07faa3ea20081e196e5d85b70d2e6f566859ae4b38dbad2e3bdc0afb86c6a25"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0-rc.2"],"size":"113538528"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b","registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"140414767"},{"id":"aa30953d3c2b4acff6d925faf6c4af0ac0577bf606ddf8491ab14ca0cabba691","repoDigests":["registry.k8s.io/kube-proxy@sha256:0961badf165d0f1fed5c8b6e473b34d8c76a9318ae090a9071416c5731431ac5","registry.k8s.io/kube-proxy@sha256:18d3df90bc9ac9200449e65164184b9238edbf75ee84364f2fcd31b032837ea1"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0-rc.2"],"size":"89133975"},{"id":"425022910de1d4ab7b21888dfad9e8f9da04f37712dccd64347bbfd735b80657","repoDigests":["registry.k8s.io/kube-scheduler@sha256:08a79e6f8708e181c82380ee521a5eaa4a1598a00b2ca708a5f70201fb17e543","registry.k8s.io/kube-scheduler@sha256:b1c01d6de69e1c092573692d1714cafdffcf4c46f6361e320c70c9ab17269856"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0-rc.2"],"size":"61568326"},{"id":"4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:fde0f6062db0a3b3323d76a4cde031f0f891b5b79d12be642b7e5aad68f2836f"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"60940831"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"9eb85987899f80958b6885bb5868b07395c7ec92e540034418c79c599599cbae","repoDigests":["docker.io/library/7d5c3920a1a5925c7cb4a744138b85cc7d8bd6f88770aa43b78eae52542ee054-tmp@sha256:51f5713157759f4558bb678906a917d4c7dfe7aaee9835871756302c40334dc9"],"repoTags":[],"size":"1637644"},{"id":"b8c82647e8a2586145e422943ae4c69c9b1600db636e1269efd256360eb396b0","repoDigests":["docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742","docker.io/library/nginx@sha256:fe6e879bfe52091d423aa46efab8899ee4da7fdc7ed7baa558dcabf3823eb0d7"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45393258"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-532156"],"size":"34114467"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"259d14bdc46735a81b3c911b38459edf9db391b3d0438805ef64b1781cd15fe4","repoDigests":["localhost/my-image@sha256:a2f88541d4da5d011ba02627c30bb837d7dffefb6d396c37f5ddf211e6e0e274"],"repoTags":["localhost/my-image:functional-532156"],"size":"1640226"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"58812704"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-532156 image ls --format json --alsologtostderr:
I0417 19:25:15.484184  722775 out.go:291] Setting OutFile to fd 1 ...
I0417 19:25:15.484402  722775 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 19:25:15.484426  722775 out.go:304] Setting ErrFile to fd 2...
I0417 19:25:15.484447  722775 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 19:25:15.484758  722775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-688109/.minikube/bin
I0417 19:25:15.485426  722775 config.go:182] Loaded profile config "functional-532156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
I0417 19:25:15.485659  722775 config.go:182] Loaded profile config "functional-532156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
I0417 19:25:15.486216  722775 cli_runner.go:164] Run: docker container inspect functional-532156 --format={{.State.Status}}
I0417 19:25:15.503870  722775 ssh_runner.go:195] Run: systemctl --version
I0417 19:25:15.503939  722775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-532156
I0417 19:25:15.523418  722775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/functional-532156/id_rsa Username:docker}
I0417 19:25:15.629373  722775 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
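The `image ls --format json` output above is a flat JSON array of records with `id`, `repoDigests`, `repoTags`, and `size` fields, where `size` is a string of bytes. A minimal sketch of summarizing that shape (the two sample records below are trimmed from the listing above; the summarizing code is mine, not minikube's):

```python
import json

# Two records trimmed from the `image ls --format json` output above;
# note that `size` is reported as a string of bytes.
raw = """
[
  {"id": "2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93",
   "repoTags": ["registry.k8s.io/coredns/coredns:v1.11.1"], "size": "58812704"},
  {"id": "8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a",
   "repoTags": ["registry.k8s.io/pause:latest"], "size": "246070"}
]
"""

images = json.loads(raw)
for img in images:
    # Untagged images (empty repoTags, like the dashboard entry above) have no tag.
    tag = img["repoTags"][0] if img["repoTags"] else "<none>"
    print(f"{tag}: {int(img['size']):,} bytes")

# Total footprint of the listed images, in MiB.
total_mib = sum(int(img["size"]) for img in images) / 2**20
print(f"total: {total_mib:.1f} MiB")
```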

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-532156 image ls --format yaml --alsologtostderr:
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 425022910de1d4ab7b21888dfad9e8f9da04f37712dccd64347bbfd735b80657
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:08a79e6f8708e181c82380ee521a5eaa4a1598a00b2ca708a5f70201fb17e543
- registry.k8s.io/kube-scheduler@sha256:b1c01d6de69e1c092573692d1714cafdffcf4c46f6361e320c70c9ab17269856
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0-rc.2
size: "61568326"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-532156
size: "34114467"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: b8c82647e8a2586145e422943ae4c69c9b1600db636e1269efd256360eb396b0
repoDigests:
- docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742
- docker.io/library/nginx@sha256:fe6e879bfe52091d423aa46efab8899ee4da7fdc7ed7baa558dcabf3823eb0d7
repoTags:
- docker.io/library/nginx:alpine
size: "45393258"
- id: 48b4217efe5ed7e85a8946668b6adedb8242a5433da2c53273fb4c112f4c5d99
repoDigests:
- docker.io/library/nginx@sha256:9ff236ed47fe39cf1f0acf349d0e5137f8b8a6fd0b46e5117a401010e56222e1
- docker.io/library/nginx@sha256:fb6f7ddf4a57af3dc8acd2884def0e3a636ec198c733618e540905b4a1f9b9c6
repoTags:
- docker.io/library/nginx:latest
size: "196122057"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "58812704"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
- registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "140414767"
- id: 78b24de5c18c446278f50432f209bd786ff0d05a4d09b222d1f17998ae2ce121
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3c970620191febadad70f54370480a68daa722f3ba57f63ff2a71bfacd092053
- registry.k8s.io/kube-apiserver@sha256:e07faa3ea20081e196e5d85b70d2e6f566859ae4b38dbad2e3bdc0afb86c6a25
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0-rc.2
size: "113538528"
- id: 88320cfaf308b507d1d1d6fa062612281320e1ca1add79c7b22b5b0a19756aa1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:a1050de799d912078dea095c6f55bba3af8358da0470ba57a24e0e6d081ff5b8
- registry.k8s.io/kube-controller-manager@sha256:a200e9dde0e8d0f39b3f7739ca4c65c17f76e03a2a4990dc0ba1b30831009ed8
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
size: "108229958"
- id: aa30953d3c2b4acff6d925faf6c4af0ac0577bf606ddf8491ab14ca0cabba691
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0961badf165d0f1fed5c8b6e473b34d8c76a9318ae090a9071416c5731431ac5
- registry.k8s.io/kube-proxy@sha256:18d3df90bc9ac9200449e65164184b9238edbf75ee84364f2fcd31b032837ea1
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0-rc.2
size: "89133975"
- id: 4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:fde0f6062db0a3b3323d76a4cde031f0f891b5b79d12be642b7e5aad68f2836f
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "60940831"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-532156 image ls --format yaml --alsologtostderr:
I0417 19:25:09.991889  722449 out.go:291] Setting OutFile to fd 1 ...
I0417 19:25:09.992115  722449 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 19:25:09.992145  722449 out.go:304] Setting ErrFile to fd 2...
I0417 19:25:09.992169  722449 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 19:25:09.992479  722449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-688109/.minikube/bin
I0417 19:25:09.994340  722449 config.go:182] Loaded profile config "functional-532156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
I0417 19:25:09.995456  722449 config.go:182] Loaded profile config "functional-532156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
I0417 19:25:09.996137  722449 cli_runner.go:164] Run: docker container inspect functional-532156 --format={{.State.Status}}
I0417 19:25:10.033527  722449 ssh_runner.go:195] Run: systemctl --version
I0417 19:25:10.033602  722449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-532156
I0417 19:25:10.053348  722449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/functional-532156/id_rsa Username:docker}
I0417 19:25:10.158096  722449 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-532156 ssh pgrep buildkitd: exit status 1 (297.494545ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 image build -t localhost/my-image:functional-532156 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-532156 image build -t localhost/my-image:functional-532156 testdata/build --alsologtostderr: (4.578807526s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-532156 image build -t localhost/my-image:functional-532156 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 9eb85987899
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-532156
--> 259d14bdc46
Successfully tagged localhost/my-image:functional-532156
259d14bdc46735a81b3c911b38459edf9db391b3d0438805ef64b1781cd15fe4
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-532156 image build -t localhost/my-image:functional-532156 testdata/build --alsologtostderr:
I0417 19:25:10.606313  722530 out.go:291] Setting OutFile to fd 1 ...
I0417 19:25:10.607066  722530 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 19:25:10.607130  722530 out.go:304] Setting ErrFile to fd 2...
I0417 19:25:10.607155  722530 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 19:25:10.607468  722530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-688109/.minikube/bin
I0417 19:25:10.608231  722530 config.go:182] Loaded profile config "functional-532156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
I0417 19:25:10.609079  722530 config.go:182] Loaded profile config "functional-532156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
I0417 19:25:10.609652  722530 cli_runner.go:164] Run: docker container inspect functional-532156 --format={{.State.Status}}
I0417 19:25:10.630998  722530 ssh_runner.go:195] Run: systemctl --version
I0417 19:25:10.631057  722530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-532156
I0417 19:25:10.661866  722530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/functional-532156/id_rsa Username:docker}
I0417 19:25:10.766061  722530 build_images.go:161] Building image from path: /tmp/build.3216525606.tar
I0417 19:25:10.766130  722530 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0417 19:25:10.778887  722530 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3216525606.tar
I0417 19:25:10.783595  722530 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3216525606.tar: stat -c "%s %y" /var/lib/minikube/build/build.3216525606.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3216525606.tar': No such file or directory
I0417 19:25:10.783625  722530 ssh_runner.go:362] scp /tmp/build.3216525606.tar --> /var/lib/minikube/build/build.3216525606.tar (3072 bytes)
I0417 19:25:10.836885  722530 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3216525606
I0417 19:25:10.848649  722530 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3216525606 -xf /var/lib/minikube/build/build.3216525606.tar
I0417 19:25:10.861938  722530 crio.go:315] Building image: /var/lib/minikube/build/build.3216525606
I0417 19:25:10.862023  722530 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-532156 /var/lib/minikube/build/build.3216525606 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0417 19:25:15.049737  722530 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-532156 /var/lib/minikube/build/build.3216525606 --cgroup-manager=cgroupfs: (4.1876785s)
I0417 19:25:15.049826  722530 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3216525606
I0417 19:25:15.061880  722530 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3216525606.tar
I0417 19:25:15.074216  722530 build_images.go:217] Built localhost/my-image:functional-532156 from /tmp/build.3216525606.tar
I0417 19:25:15.074252  722530 build_images.go:133] succeeded building to: functional-532156
I0417 19:25:15.074258  722530 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.18s)
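The stderr above shows the build path on the cri-o runtime: the build context (`testdata/build`) is packaged into `/tmp/build.<N>.tar`, copied into the node, untarred under `/var/lib/minikube/build/`, and built there with `sudo podman build`. The packaging step can be sketched with the standard library; the function name and paths here are illustrative, not minikube's actual code:

```python
import os
import tarfile
import tempfile
from pathlib import Path

def package_build_context(context_dir: str) -> str:
    """Tar up a build context, like the /tmp/build.<N>.tar seen in the log."""
    fd, tar_path = tempfile.mkstemp(prefix="build.", suffix=".tar")
    os.close(fd)
    with tarfile.open(tar_path, "w") as tar:
        for entry in sorted(Path(context_dir).iterdir()):
            # Store entries relative to the context root, so the remote
            # `tar -C <dir> -xf` unpacks the Dockerfile at top level.
            tar.add(entry, arcname=entry.name)
    return tar_path

# Example: a context mirroring the 3-step Dockerfile from the build output above.
ctx = tempfile.mkdtemp()
(Path(ctx) / "Dockerfile").write_text(
    "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n")
(Path(ctx) / "content.txt").write_text("hello\n")
tar_path = package_build_context(ctx)
with tarfile.open(tar_path) as tar:
    names = sorted(tar.getnames())
print(names)  # ['Dockerfile', 'content.txt']
```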

TestFunctional/parallel/ImageCommands/Setup (2.53s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.512813193s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-532156
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.53s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 image load --daemon gcr.io/google-containers/addon-resizer:functional-532156 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-532156 image load --daemon gcr.io/google-containers/addon-resizer:functional-532156 --alsologtostderr: (5.409216604s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.66s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-532156 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-532156 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-g2d9n" [7b5268da-82f9-41db-bd02-02969d8b1af9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0417 19:24:23.870687  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
helpers_test.go:344: "hello-node-65f5d5cc78-g2d9n" [7b5268da-82f9-41db-bd02-02969d8b1af9] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004257165s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.30s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 image load --daemon gcr.io/google-containers/addon-resizer:functional-532156 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-532156 image load --daemon gcr.io/google-containers/addon-resizer:functional-532156 --alsologtostderr: (2.809768045s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.05s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.767744524s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-532156
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 image load --daemon gcr.io/google-containers/addon-resizer:functional-532156 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-532156 image load --daemon gcr.io/google-containers/addon-resizer:functional-532156 --alsologtostderr: (4.575436155s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.63s)

TestFunctional/parallel/ServiceCmd/List (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.45s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 service list -o json
functional_test.go:1490: Took "421.098388ms" to run "out/minikube-linux-arm64 -p functional-532156 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:30939
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

TestFunctional/parallel/ServiceCmd/Format (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.48s)

TestFunctional/parallel/ServiceCmd/URL (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:30939
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.47s)
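The HTTPS and URL subtests above both resolve the same NodePort endpoint: `minikube service --url` joins the node IP (`192.168.49.2`) with the service's NodePort (`30939`). A tiny sketch of that join, using values from the log (the function name is mine, not minikube's):

```python
from urllib.parse import urlunparse

def service_endpoint(node_ip: str, node_port: int, https: bool = False) -> str:
    """Join a node IP and a NodePort the way `minikube service --url` reports it."""
    scheme = "https" if https else "http"
    return urlunparse((scheme, f"{node_ip}:{node_port}", "", "", "", ""))

# Values from the log: node 192.168.49.2, hello-node NodePort 30939.
print(service_endpoint("192.168.49.2", 30939))              # http://192.168.49.2:30939
print(service_endpoint("192.168.49.2", 30939, https=True))  # https://192.168.49.2:30939
```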

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.7s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-532156 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-532156 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-532156 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-532156 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 719211: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.70s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-532156 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.55s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-532156 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [e217abbe-a9ea-4be5-bb3a-4255a02da21c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [e217abbe-a9ea-4be5-bb3a-4255a02da21c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.005026126s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.55s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 image save gcr.io/google-containers/addon-resizer:functional-532156 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-arm64 -p functional-532156 image save gcr.io/google-containers/addon-resizer:functional-532156 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.071560082s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.07s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 image rm gcr.io/google-containers/addon-resizer:functional-532156 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.85s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-532156 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.319827046s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-532156
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 image save --daemon gcr.io/google-containers/addon-resizer:functional-532156 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-532156
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.99s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-532156 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.132.125 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-532156 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "327.851102ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "67.805809ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "333.76291ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "63.214864ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.93s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-532156 /tmp/TestFunctionalparallelMountCmdany-port1588133316/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1713381896286762485" to /tmp/TestFunctionalparallelMountCmdany-port1588133316/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1713381896286762485" to /tmp/TestFunctionalparallelMountCmdany-port1588133316/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1713381896286762485" to /tmp/TestFunctionalparallelMountCmdany-port1588133316/001/test-1713381896286762485
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-532156 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (319.052003ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 17 19:24 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 17 19:24 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 17 19:24 test-1713381896286762485
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh cat /mount-9p/test-1713381896286762485
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-532156 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [3dc1c572-66d9-4320-8306-882545bc40c7] Pending
helpers_test.go:344: "busybox-mount" [3dc1c572-66d9-4320-8306-882545bc40c7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [3dc1c572-66d9-4320-8306-882545bc40c7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [3dc1c572-66d9-4320-8306-882545bc40c7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003785709s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-532156 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-532156 /tmp/TestFunctionalparallelMountCmdany-port1588133316/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.93s)
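The first `findmnt -T /mount-9p | grep 9p` above exits non-zero because the 9p mount daemon comes up asynchronously, and the harness simply re-runs the check. A minimal retry helper in the same spirit (hypothetical sketch, not harness code; `retry`, `check`, and the marker file are illustrative stand-ins) might look like:

```shell
#!/bin/sh
# Hypothetical retry helper: re-run a command until it exits 0, the way
# the test re-runs "findmnt -T /mount-9p | grep 9p" after the mount
# daemon starts asynchronously.
retry() {
    attempts=$1; shift
    i=1
    while [ "$i" -le "$attempts" ]; do
        if "$@"; then
            return 0
        fi
        i=$((i + 1))
        # a real harness would sleep/back off between attempts here
    done
    return 1
}

# Demo stand-in for findmnt: fails on the first call, succeeds afterwards.
marker=$(mktemp -u)
check() { [ -e "$marker" ] || { touch "$marker"; return 1; }; }

retry 3 check && echo "mount visible"
rm -f "$marker"
```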

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.91s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-532156 /tmp/TestFunctionalparallelMountCmdspecific-port2231391802/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-532156 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (359.818179ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-532156 /tmp/TestFunctionalparallelMountCmdspecific-port2231391802/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-532156 ssh "sudo umount -f /mount-9p": exit status 1 (284.905005ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-532156 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-532156 /tmp/TestFunctionalparallelMountCmdspecific-port2231391802/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.91s)
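The `umount -f` above exits 32 with "not mounted" because stopping the mount daemon already cleaned up the 9p mount; ssh forwards the remote exit status, and umount(8) uses 32 for a mount failure. A hypothetical classifier (`classify_umount` is an illustrative helper, not minikube code) shows how such a status could be interpreted:

```shell
#!/bin/sh
# Hypothetical helper: classify the exit status of a forced umount,
# where ssh propagates the remote command's status back to the caller.
classify_umount() {
    case "$1" in
        0)  echo "unmounted" ;;
        32) echo "not mounted (already cleaned up)" ;;
        *)  echo "umount failed with status $1" ;;
    esac
}

( exit 32 )        # simulate the remote umount result seen in the log
classify_umount $?
```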

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.48s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-532156 /tmp/TestFunctionalparallelMountCmdVerifyCleanup141427027/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-532156 /tmp/TestFunctionalparallelMountCmdVerifyCleanup141427027/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-532156 /tmp/TestFunctionalparallelMountCmdVerifyCleanup141427027/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-532156 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-532156 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-532156 /tmp/TestFunctionalparallelMountCmdVerifyCleanup141427027/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-532156 /tmp/TestFunctionalparallelMountCmdVerifyCleanup141427027/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-532156 /tmp/TestFunctionalparallelMountCmdVerifyCleanup141427027/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.48s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.1s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-532156
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-532156
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-532156
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (158.68s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-615987 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0417 19:25:45.791502  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-615987 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m37.860747999s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (158.68s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.08s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-615987 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-615987 -- rollout status deployment/busybox
E0417 19:28:01.948532  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-615987 -- rollout status deployment/busybox: (4.966325126s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-615987 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-615987 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-615987 -- exec busybox-fc5497c4f-4tknl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-615987 -- exec busybox-fc5497c4f-nrbsc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-615987 -- exec busybox-fc5497c4f-xdp6x -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-615987 -- exec busybox-fc5497c4f-4tknl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-615987 -- exec busybox-fc5497c4f-nrbsc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-615987 -- exec busybox-fc5497c4f-xdp6x -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-615987 -- exec busybox-fc5497c4f-4tknl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-615987 -- exec busybox-fc5497c4f-nrbsc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-615987 -- exec busybox-fc5497c4f-xdp6x -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.08s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.79s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-615987 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-615987 -- exec busybox-fc5497c4f-4tknl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-615987 -- exec busybox-fc5497c4f-4tknl -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-615987 -- exec busybox-fc5497c4f-nrbsc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-615987 -- exec busybox-fc5497c4f-nrbsc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-615987 -- exec busybox-fc5497c4f-xdp6x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-615987 -- exec busybox-fc5497c4f-xdp6x -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.79s)
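The `nslookup ... | awk 'NR==5' | cut -d' ' -f3` pipeline above pulls the resolved IP out of busybox-style nslookup output: line 5 is the "Address" line for the queried name, and the third space-separated field is the address. A self-contained illustration with canned output (the text below is an illustrative stand-in for what the busybox pod would print):

```shell
#!/bin/sh
# Canned busybox-style nslookup output; in the test this comes from
# running nslookup inside the busybox pod.
out='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal'

# Line 5 is the Address line for the queried name; field 3 is the IP.
ip=$(printf '%s\n' "$out" | awk 'NR==5' | cut -d' ' -f3)
echo "$ip"   # → 192.168.49.1
```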

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (55.89s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-615987 -v=7 --alsologtostderr
E0417 19:28:29.632507  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-615987 -v=7 --alsologtostderr: (54.863222316s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-615987 status -v=7 --alsologtostderr: (1.028937611s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.89s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-615987 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.78s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.78s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.22s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-615987 status --output json -v=7 --alsologtostderr: (1.047331151s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 cp testdata/cp-test.txt ha-615987:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 cp ha-615987:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile6907267/001/cp-test_ha-615987.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 cp ha-615987:/home/docker/cp-test.txt ha-615987-m02:/home/docker/cp-test_ha-615987_ha-615987-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987-m02 "sudo cat /home/docker/cp-test_ha-615987_ha-615987-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 cp ha-615987:/home/docker/cp-test.txt ha-615987-m03:/home/docker/cp-test_ha-615987_ha-615987-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987-m03 "sudo cat /home/docker/cp-test_ha-615987_ha-615987-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 cp ha-615987:/home/docker/cp-test.txt ha-615987-m04:/home/docker/cp-test_ha-615987_ha-615987-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987-m04 "sudo cat /home/docker/cp-test_ha-615987_ha-615987-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 cp testdata/cp-test.txt ha-615987-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 cp ha-615987-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile6907267/001/cp-test_ha-615987-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 cp ha-615987-m02:/home/docker/cp-test.txt ha-615987:/home/docker/cp-test_ha-615987-m02_ha-615987.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987 "sudo cat /home/docker/cp-test_ha-615987-m02_ha-615987.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 cp ha-615987-m02:/home/docker/cp-test.txt ha-615987-m03:/home/docker/cp-test_ha-615987-m02_ha-615987-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987-m03 "sudo cat /home/docker/cp-test_ha-615987-m02_ha-615987-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 cp ha-615987-m02:/home/docker/cp-test.txt ha-615987-m04:/home/docker/cp-test_ha-615987-m02_ha-615987-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987-m04 "sudo cat /home/docker/cp-test_ha-615987-m02_ha-615987-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 cp testdata/cp-test.txt ha-615987-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 cp ha-615987-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile6907267/001/cp-test_ha-615987-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 cp ha-615987-m03:/home/docker/cp-test.txt ha-615987:/home/docker/cp-test_ha-615987-m03_ha-615987.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987 "sudo cat /home/docker/cp-test_ha-615987-m03_ha-615987.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 cp ha-615987-m03:/home/docker/cp-test.txt ha-615987-m02:/home/docker/cp-test_ha-615987-m03_ha-615987-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987-m02 "sudo cat /home/docker/cp-test_ha-615987-m03_ha-615987-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 cp ha-615987-m03:/home/docker/cp-test.txt ha-615987-m04:/home/docker/cp-test_ha-615987-m03_ha-615987-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987-m04 "sudo cat /home/docker/cp-test_ha-615987-m03_ha-615987-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 cp testdata/cp-test.txt ha-615987-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 cp ha-615987-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile6907267/001/cp-test_ha-615987-m04.txt
E0417 19:29:20.883577  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
E0417 19:29:20.888844  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
E0417 19:29:20.899379  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
E0417 19:29:20.919755  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
E0417 19:29:20.959932  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
E0417 19:29:21.040134  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987-m04 "sudo cat /home/docker/cp-test.txt"
E0417 19:29:21.200778  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 cp ha-615987-m04:/home/docker/cp-test.txt ha-615987:/home/docker/cp-test_ha-615987-m04_ha-615987.txt
E0417 19:29:21.521088  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987-m04 "sudo cat /home/docker/cp-test.txt"
E0417 19:29:22.162263  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987 "sudo cat /home/docker/cp-test_ha-615987-m04_ha-615987.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 cp ha-615987-m04:/home/docker/cp-test.txt ha-615987-m02:/home/docker/cp-test_ha-615987-m04_ha-615987-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987-m02 "sudo cat /home/docker/cp-test_ha-615987-m04_ha-615987-m02.txt"
E0417 19:29:23.442935  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 cp ha-615987-m04:/home/docker/cp-test.txt ha-615987-m03:/home/docker/cp-test_ha-615987-m04_ha-615987-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 ssh -n ha-615987-m03 "sudo cat /home/docker/cp-test_ha-615987-m04_ha-615987-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.22s)

TestMultiControlPlane/serial/StopSecondaryNode (12.75s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 node stop m02 -v=7 --alsologtostderr
E0417 19:29:26.003483  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
E0417 19:29:31.124150  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-615987 node stop m02 -v=7 --alsologtostderr: (11.985018914s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-615987 status -v=7 --alsologtostderr: exit status 7 (764.93673ms)
-- stdout --
	ha-615987
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-615987-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-615987-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-615987-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0417 19:29:36.709055  737547 out.go:291] Setting OutFile to fd 1 ...
	I0417 19:29:36.709260  737547 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:29:36.709289  737547 out.go:304] Setting ErrFile to fd 2...
	I0417 19:29:36.709307  737547 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:29:36.709590  737547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-688109/.minikube/bin
	I0417 19:29:36.710638  737547 out.go:298] Setting JSON to false
	I0417 19:29:36.710730  737547 mustload.go:65] Loading cluster: ha-615987
	I0417 19:29:36.710853  737547 notify.go:220] Checking for updates...
	I0417 19:29:36.711289  737547 config.go:182] Loaded profile config "ha-615987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:29:36.711338  737547 status.go:255] checking status of ha-615987 ...
	I0417 19:29:36.711980  737547 cli_runner.go:164] Run: docker container inspect ha-615987 --format={{.State.Status}}
	I0417 19:29:36.730869  737547 status.go:330] ha-615987 host status = "Running" (err=<nil>)
	I0417 19:29:36.730893  737547 host.go:66] Checking if "ha-615987" exists ...
	I0417 19:29:36.731212  737547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-615987
	I0417 19:29:36.748050  737547 host.go:66] Checking if "ha-615987" exists ...
	I0417 19:29:36.748516  737547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 19:29:36.748584  737547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-615987
	I0417 19:29:36.773740  737547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/ha-615987/id_rsa Username:docker}
	I0417 19:29:36.870567  737547 ssh_runner.go:195] Run: systemctl --version
	I0417 19:29:36.876938  737547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 19:29:36.892761  737547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0417 19:29:36.961930  737547 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:72 SystemTime:2024-04-17 19:29:36.951825449 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0417 19:29:36.962633  737547 kubeconfig.go:125] found "ha-615987" server: "https://192.168.49.254:8443"
	I0417 19:29:36.962668  737547 api_server.go:166] Checking apiserver status ...
	I0417 19:29:36.962716  737547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 19:29:36.975025  737547 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup
	I0417 19:29:36.985765  737547 api_server.go:182] apiserver freezer: "5:freezer:/docker/b714a6fe9683093adf513313c2e37132745bebcdcf86cc2d8297997616205c76/crio/crio-6345525c93357d262b9f2d90311270cc905d1d7644bf525e24b1bc17756910d6"
	I0417 19:29:36.985848  737547 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b714a6fe9683093adf513313c2e37132745bebcdcf86cc2d8297997616205c76/crio/crio-6345525c93357d262b9f2d90311270cc905d1d7644bf525e24b1bc17756910d6/freezer.state
	I0417 19:29:36.995457  737547 api_server.go:204] freezer state: "THAWED"
	I0417 19:29:36.995494  737547 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0417 19:29:37.006189  737547 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0417 19:29:37.006229  737547 status.go:422] ha-615987 apiserver status = Running (err=<nil>)
	I0417 19:29:37.006244  737547 status.go:257] ha-615987 status: &{Name:ha-615987 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 19:29:37.006265  737547 status.go:255] checking status of ha-615987-m02 ...
	I0417 19:29:37.006613  737547 cli_runner.go:164] Run: docker container inspect ha-615987-m02 --format={{.State.Status}}
	I0417 19:29:37.046804  737547 status.go:330] ha-615987-m02 host status = "Stopped" (err=<nil>)
	I0417 19:29:37.046828  737547 status.go:343] host is not running, skipping remaining checks
	I0417 19:29:37.046836  737547 status.go:257] ha-615987-m02 status: &{Name:ha-615987-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 19:29:37.046859  737547 status.go:255] checking status of ha-615987-m03 ...
	I0417 19:29:37.047168  737547 cli_runner.go:164] Run: docker container inspect ha-615987-m03 --format={{.State.Status}}
	I0417 19:29:37.064184  737547 status.go:330] ha-615987-m03 host status = "Running" (err=<nil>)
	I0417 19:29:37.064238  737547 host.go:66] Checking if "ha-615987-m03" exists ...
	I0417 19:29:37.064669  737547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-615987-m03
	I0417 19:29:37.080968  737547 host.go:66] Checking if "ha-615987-m03" exists ...
	I0417 19:29:37.081308  737547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 19:29:37.081361  737547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-615987-m03
	I0417 19:29:37.097929  737547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/ha-615987-m03/id_rsa Username:docker}
	I0417 19:29:37.193797  737547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 19:29:37.206598  737547 kubeconfig.go:125] found "ha-615987" server: "https://192.168.49.254:8443"
	I0417 19:29:37.206626  737547 api_server.go:166] Checking apiserver status ...
	I0417 19:29:37.206669  737547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 19:29:37.217718  737547 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1338/cgroup
	I0417 19:29:37.227523  737547 api_server.go:182] apiserver freezer: "5:freezer:/docker/9b66e214075f9179370f170af0402de9b770c1337327ee712f8c52e6c1bbf11c/crio/crio-aaf62f7b066abcdb2a7fe952b63c8fac59fc23fb8d386988a1f887bd1853c5d7"
	I0417 19:29:37.227598  737547 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9b66e214075f9179370f170af0402de9b770c1337327ee712f8c52e6c1bbf11c/crio/crio-aaf62f7b066abcdb2a7fe952b63c8fac59fc23fb8d386988a1f887bd1853c5d7/freezer.state
	I0417 19:29:37.237350  737547 api_server.go:204] freezer state: "THAWED"
	I0417 19:29:37.237424  737547 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0417 19:29:37.246175  737547 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0417 19:29:37.246212  737547 status.go:422] ha-615987-m03 apiserver status = Running (err=<nil>)
	I0417 19:29:37.246247  737547 status.go:257] ha-615987-m03 status: &{Name:ha-615987-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 19:29:37.246277  737547 status.go:255] checking status of ha-615987-m04 ...
	I0417 19:29:37.246714  737547 cli_runner.go:164] Run: docker container inspect ha-615987-m04 --format={{.State.Status}}
	I0417 19:29:37.261103  737547 status.go:330] ha-615987-m04 host status = "Running" (err=<nil>)
	I0417 19:29:37.261128  737547 host.go:66] Checking if "ha-615987-m04" exists ...
	I0417 19:29:37.261432  737547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-615987-m04
	I0417 19:29:37.276761  737547 host.go:66] Checking if "ha-615987-m04" exists ...
	I0417 19:29:37.277117  737547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 19:29:37.277221  737547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-615987-m04
	I0417 19:29:37.293136  737547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33572 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/ha-615987-m04/id_rsa Username:docker}
	I0417 19:29:37.389831  737547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 19:29:37.401830  737547 status.go:257] ha-615987-m04 status: &{Name:ha-615987-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.75s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.53s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.53s)

TestMultiControlPlane/serial/RestartSecondaryNode (35.69s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 node start m02 -v=7 --alsologtostderr
E0417 19:29:41.364482  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
E0417 19:30:01.844731  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-615987 node start m02 -v=7 --alsologtostderr: (34.258233236s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-615987 status -v=7 --alsologtostderr: (1.294225854s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (35.69s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.59s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (4.588905389s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.59s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (192.63s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-615987 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-615987 -v=7 --alsologtostderr
E0417 19:30:42.805588  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-615987 -v=7 --alsologtostderr: (36.853061889s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-615987 --wait=true -v=7 --alsologtostderr
E0417 19:32:04.725840  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
E0417 19:33:01.949490  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-615987 --wait=true -v=7 --alsologtostderr: (2m35.582941081s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-615987
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (192.63s)

TestMultiControlPlane/serial/DeleteSecondaryNode (13.04s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-615987 node delete m03 -v=7 --alsologtostderr: (12.041508072s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (13.04s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

TestMultiControlPlane/serial/StopCluster (35.72s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-615987 stop -v=7 --alsologtostderr: (35.608961661s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-615987 status -v=7 --alsologtostderr: exit status 7 (108.013941ms)
-- stdout --
	ha-615987
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-615987-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-615987-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0417 19:34:20.088204  751717 out.go:291] Setting OutFile to fd 1 ...
	I0417 19:34:20.088400  751717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:34:20.088409  751717 out.go:304] Setting ErrFile to fd 2...
	I0417 19:34:20.088415  751717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:34:20.088702  751717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-688109/.minikube/bin
	I0417 19:34:20.088905  751717 out.go:298] Setting JSON to false
	I0417 19:34:20.088942  751717 mustload.go:65] Loading cluster: ha-615987
	I0417 19:34:20.089084  751717 notify.go:220] Checking for updates...
	I0417 19:34:20.089439  751717 config.go:182] Loaded profile config "ha-615987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:34:20.089451  751717 status.go:255] checking status of ha-615987 ...
	I0417 19:34:20.090030  751717 cli_runner.go:164] Run: docker container inspect ha-615987 --format={{.State.Status}}
	I0417 19:34:20.107490  751717 status.go:330] ha-615987 host status = "Stopped" (err=<nil>)
	I0417 19:34:20.107528  751717 status.go:343] host is not running, skipping remaining checks
	I0417 19:34:20.107538  751717 status.go:257] ha-615987 status: &{Name:ha-615987 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 19:34:20.107571  751717 status.go:255] checking status of ha-615987-m02 ...
	I0417 19:34:20.108107  751717 cli_runner.go:164] Run: docker container inspect ha-615987-m02 --format={{.State.Status}}
	I0417 19:34:20.124367  751717 status.go:330] ha-615987-m02 host status = "Stopped" (err=<nil>)
	I0417 19:34:20.124413  751717 status.go:343] host is not running, skipping remaining checks
	I0417 19:34:20.124422  751717 status.go:257] ha-615987-m02 status: &{Name:ha-615987-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 19:34:20.124445  751717 status.go:255] checking status of ha-615987-m04 ...
	I0417 19:34:20.124774  751717 cli_runner.go:164] Run: docker container inspect ha-615987-m04 --format={{.State.Status}}
	I0417 19:34:20.139744  751717 status.go:330] ha-615987-m04 host status = "Stopped" (err=<nil>)
	I0417 19:34:20.139768  751717 status.go:343] host is not running, skipping remaining checks
	I0417 19:34:20.139793  751717 status.go:257] ha-615987-m04 status: &{Name:ha-615987-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.72s)

TestMultiControlPlane/serial/RestartCluster (74.45s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-615987 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0417 19:34:20.883411  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
E0417 19:34:48.566071  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-615987 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m13.519828507s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (74.45s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

TestMultiControlPlane/serial/AddSecondaryNode (60.81s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-615987 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-615987 --control-plane -v=7 --alsologtostderr: (59.846505647s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-615987 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (60.81s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.79s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.79s)

TestJSONOutput/start/Command (52.73s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-924455 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-924455 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (52.723791495s)
--- PASS: TestJSONOutput/start/Command (52.73s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-924455 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.69s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-924455 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.84s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-924455 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-924455 --output=json --user=testUser: (5.843044798s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-951232 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-951232 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (91.591936ms)
-- stdout --
	{"specversion":"1.0","id":"68831976-7546-43d6-8c3f-5e51df53a694","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-951232] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e1de1717-d885-4ea2-ad65-e634d66002e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18665"}}
	{"specversion":"1.0","id":"62cc2d90-66f3-4779-b60d-ae0db04446d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"acff9763-4532-4287-8f42-2da29ee49cb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18665-688109/kubeconfig"}}
	{"specversion":"1.0","id":"c3d9fde5-b97a-493f-96c0-52191b4b2a53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-688109/.minikube"}}
	{"specversion":"1.0","id":"a16d6b19-fd53-47c3-a8d9-82fc05d69028","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7a09d9ee-4f39-4b3f-a430-e7f8c0fe887f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cd647285-2b24-4ba0-b8c7-fbba1cb168ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-951232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-951232
--- PASS: TestErrorJSONOutput (0.23s)

TestKicCustomNetwork/create_custom_network (38.79s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-279832 --network=
E0417 19:38:01.951582  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-279832 --network=: (36.712654315s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-279832" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-279832
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-279832: (2.055042613s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.79s)

TestKicCustomNetwork/use_default_bridge_network (33.58s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-287117 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-287117 --network=bridge: (31.252138769s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-287117" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-287117
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-287117: (2.300347054s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.58s)

TestKicExistingNetwork (36.13s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-819553 --network=existing-network
E0417 19:39:20.883829  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
E0417 19:39:24.992791  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-819553 --network=existing-network: (34.027693897s)
helpers_test.go:175: Cleaning up "existing-network-819553" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-819553
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-819553: (1.953412123s)
--- PASS: TestKicExistingNetwork (36.13s)

TestKicCustomSubnet (31.35s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-243504 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-243504 --subnet=192.168.60.0/24: (29.216963672s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-243504 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-243504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-243504
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-243504: (2.105439504s)
--- PASS: TestKicCustomSubnet (31.35s)

TestKicStaticIP (31.86s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-811996 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-811996 --static-ip=192.168.200.200: (29.645083981s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-811996 ip
helpers_test.go:175: Cleaning up "static-ip-811996" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-811996
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-811996: (2.055386052s)
--- PASS: TestKicStaticIP (31.86s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (66.11s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-183986 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-183986 --driver=docker  --container-runtime=crio: (27.801788089s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-186620 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-186620 --driver=docker  --container-runtime=crio: (33.101530414s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-183986
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-186620
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-186620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-186620
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-186620: (1.929987218s)
helpers_test.go:175: Cleaning up "first-183986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-183986
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-183986: (1.934239434s)
--- PASS: TestMinikubeProfile (66.11s)

TestMountStart/serial/StartWithMountFirst (7.52s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-860714 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-860714 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.519646456s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.52s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-860714 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (9.31s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-874350 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-874350 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.308403023s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.31s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-874350 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.61s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-860714 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-860714 --alsologtostderr -v=5: (1.605166314s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-874350 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-874350
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-874350: (1.203151524s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.74s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-874350
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-874350: (6.735467864s)
--- PASS: TestMountStart/serial/RestartStopped (7.74s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-874350 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (92.35s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-591546 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0417 19:43:01.948585  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-591546 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m31.847678349s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (92.35s)

TestMultiNode/serial/DeployApp2Nodes (4.69s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591546 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591546 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-591546 -- rollout status deployment/busybox: (2.739946937s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591546 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591546 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591546 -- exec busybox-fc5497c4f-dnnj8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591546 -- exec busybox-fc5497c4f-rrl2c -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591546 -- exec busybox-fc5497c4f-dnnj8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591546 -- exec busybox-fc5497c4f-rrl2c -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591546 -- exec busybox-fc5497c4f-dnnj8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591546 -- exec busybox-fc5497c4f-rrl2c -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.69s)

TestMultiNode/serial/PingHostFrom2Pods (1.05s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591546 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591546 -- exec busybox-fc5497c4f-dnnj8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591546 -- exec busybox-fc5497c4f-dnnj8 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591546 -- exec busybox-fc5497c4f-rrl2c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591546 -- exec busybox-fc5497c4f-rrl2c -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.05s)

TestMultiNode/serial/AddNode (47.84s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-591546 -v 3 --alsologtostderr
E0417 19:44:20.883991  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-591546 -v 3 --alsologtostderr: (47.108172777s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.84s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-591546 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

TestMultiNode/serial/CopyFile (10.24s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 cp testdata/cp-test.txt multinode-591546:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 ssh -n multinode-591546 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 cp multinode-591546:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile434472070/001/cp-test_multinode-591546.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 ssh -n multinode-591546 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 cp multinode-591546:/home/docker/cp-test.txt multinode-591546-m02:/home/docker/cp-test_multinode-591546_multinode-591546-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 ssh -n multinode-591546 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 ssh -n multinode-591546-m02 "sudo cat /home/docker/cp-test_multinode-591546_multinode-591546-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 cp multinode-591546:/home/docker/cp-test.txt multinode-591546-m03:/home/docker/cp-test_multinode-591546_multinode-591546-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 ssh -n multinode-591546 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 ssh -n multinode-591546-m03 "sudo cat /home/docker/cp-test_multinode-591546_multinode-591546-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 cp testdata/cp-test.txt multinode-591546-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 ssh -n multinode-591546-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 cp multinode-591546-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile434472070/001/cp-test_multinode-591546-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 ssh -n multinode-591546-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 cp multinode-591546-m02:/home/docker/cp-test.txt multinode-591546:/home/docker/cp-test_multinode-591546-m02_multinode-591546.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 ssh -n multinode-591546-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 ssh -n multinode-591546 "sudo cat /home/docker/cp-test_multinode-591546-m02_multinode-591546.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 cp multinode-591546-m02:/home/docker/cp-test.txt multinode-591546-m03:/home/docker/cp-test_multinode-591546-m02_multinode-591546-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 ssh -n multinode-591546-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 ssh -n multinode-591546-m03 "sudo cat /home/docker/cp-test_multinode-591546-m02_multinode-591546-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 cp testdata/cp-test.txt multinode-591546-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 ssh -n multinode-591546-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 cp multinode-591546-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile434472070/001/cp-test_multinode-591546-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 ssh -n multinode-591546-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 cp multinode-591546-m03:/home/docker/cp-test.txt multinode-591546:/home/docker/cp-test_multinode-591546-m03_multinode-591546.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 ssh -n multinode-591546-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 ssh -n multinode-591546 "sudo cat /home/docker/cp-test_multinode-591546-m03_multinode-591546.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 cp multinode-591546-m03:/home/docker/cp-test.txt multinode-591546-m02:/home/docker/cp-test_multinode-591546-m03_multinode-591546-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 ssh -n multinode-591546-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 ssh -n multinode-591546-m02 "sudo cat /home/docker/cp-test_multinode-591546-m03_multinode-591546-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.24s)

TestMultiNode/serial/StopNode (2.26s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-591546 node stop m03: (1.216129213s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-591546 status: exit status 7 (512.235561ms)
-- stdout --
	multinode-591546
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-591546-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-591546-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-591546 status --alsologtostderr: exit status 7 (526.484364ms)
-- stdout --
	multinode-591546
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-591546-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-591546-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0417 19:44:55.052539  801613 out.go:291] Setting OutFile to fd 1 ...
	I0417 19:44:55.052791  801613 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:44:55.052823  801613 out.go:304] Setting ErrFile to fd 2...
	I0417 19:44:55.052842  801613 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:44:55.053167  801613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-688109/.minikube/bin
	I0417 19:44:55.053429  801613 out.go:298] Setting JSON to false
	I0417 19:44:55.053505  801613 mustload.go:65] Loading cluster: multinode-591546
	I0417 19:44:55.053637  801613 notify.go:220] Checking for updates...
	I0417 19:44:55.054163  801613 config.go:182] Loaded profile config "multinode-591546": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:44:55.054207  801613 status.go:255] checking status of multinode-591546 ...
	I0417 19:44:55.054828  801613 cli_runner.go:164] Run: docker container inspect multinode-591546 --format={{.State.Status}}
	I0417 19:44:55.072468  801613 status.go:330] multinode-591546 host status = "Running" (err=<nil>)
	I0417 19:44:55.072494  801613 host.go:66] Checking if "multinode-591546" exists ...
	I0417 19:44:55.072906  801613 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-591546
	I0417 19:44:55.090388  801613 host.go:66] Checking if "multinode-591546" exists ...
	I0417 19:44:55.090705  801613 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 19:44:55.090758  801613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-591546
	I0417 19:44:55.109459  801613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33677 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/multinode-591546/id_rsa Username:docker}
	I0417 19:44:55.213851  801613 ssh_runner.go:195] Run: systemctl --version
	I0417 19:44:55.218595  801613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 19:44:55.229818  801613 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0417 19:44:55.284625  801613 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-04-17 19:44:55.275135082 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0417 19:44:55.285231  801613 kubeconfig.go:125] found "multinode-591546" server: "https://192.168.67.2:8443"
	I0417 19:44:55.285263  801613 api_server.go:166] Checking apiserver status ...
	I0417 19:44:55.285306  801613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 19:44:55.295765  801613 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1408/cgroup
	I0417 19:44:55.304883  801613 api_server.go:182] apiserver freezer: "5:freezer:/docker/fa13aa9a39a2ad0d5813626213239563142ab1a82780406cb0e6e604e92fdd2d/crio/crio-41243bff2359feff0f610132040ed9080d0d68382b3678be9fe343d3d983fdd9"
	I0417 19:44:55.304960  801613 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fa13aa9a39a2ad0d5813626213239563142ab1a82780406cb0e6e604e92fdd2d/crio/crio-41243bff2359feff0f610132040ed9080d0d68382b3678be9fe343d3d983fdd9/freezer.state
	I0417 19:44:55.313440  801613 api_server.go:204] freezer state: "THAWED"
	I0417 19:44:55.313476  801613 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0417 19:44:55.321231  801613 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0417 19:44:55.321261  801613 status.go:422] multinode-591546 apiserver status = Running (err=<nil>)
	I0417 19:44:55.321274  801613 status.go:257] multinode-591546 status: &{Name:multinode-591546 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 19:44:55.321290  801613 status.go:255] checking status of multinode-591546-m02 ...
	I0417 19:44:55.321606  801613 cli_runner.go:164] Run: docker container inspect multinode-591546-m02 --format={{.State.Status}}
	I0417 19:44:55.336233  801613 status.go:330] multinode-591546-m02 host status = "Running" (err=<nil>)
	I0417 19:44:55.336270  801613 host.go:66] Checking if "multinode-591546-m02" exists ...
	I0417 19:44:55.336601  801613 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-591546-m02
	I0417 19:44:55.352814  801613 host.go:66] Checking if "multinode-591546-m02" exists ...
	I0417 19:44:55.353170  801613 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 19:44:55.353217  801613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-591546-m02
	I0417 19:44:55.369024  801613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33682 SSHKeyPath:/home/jenkins/minikube-integration/18665-688109/.minikube/machines/multinode-591546-m02/id_rsa Username:docker}
	I0417 19:44:55.465971  801613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 19:44:55.479456  801613 status.go:257] multinode-591546-m02 status: &{Name:multinode-591546-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0417 19:44:55.479492  801613 status.go:255] checking status of multinode-591546-m03 ...
	I0417 19:44:55.479812  801613 cli_runner.go:164] Run: docker container inspect multinode-591546-m03 --format={{.State.Status}}
	I0417 19:44:55.494822  801613 status.go:330] multinode-591546-m03 host status = "Stopped" (err=<nil>)
	I0417 19:44:55.494848  801613 status.go:343] host is not running, skipping remaining checks
	I0417 19:44:55.494855  801613 status.go:257] multinode-591546-m03 status: &{Name:multinode-591546-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
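A note on the `Non-zero exit` lines above: `minikube status` deliberately exits with code 7 when any node in the profile is stopped, so those exits are expected output here, not failures. A minimal sketch of how a wrapper script can consume that convention — `fake_status` is a hypothetical stand-in for the real `minikube -p <profile> status` call:

```shell
# `fake_status` stands in for `minikube -p <profile> status`: it prints the
# per-node report and returns 7 when at least one host is stopped.
fake_status() {
  printf 'multinode-591546-m03\ntype: Worker\nhost: Stopped\nkubelet: Stopped\n'
  return 7
}

if out=$(fake_status); then
  echo "all nodes running"
else
  rc=$?                       # 7 = informational: a node is stopped
  echo "status exit code: $rc"
  echo "stopped hosts: $(echo "$out" | grep -c '^host: Stopped')"
fi
```

Treating exit 7 as data rather than as a command failure is what lets the test above pass despite the non-zero exits.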

TestMultiNode/serial/StartAfterStop (10.43s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-591546 node start m03 -v=7 --alsologtostderr: (9.679917895s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.43s)

TestMultiNode/serial/RestartKeepsNodes (88.55s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-591546
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-591546
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-591546: (24.927038162s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-591546 --wait=true -v=8 --alsologtostderr
E0417 19:45:43.926579  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-591546 --wait=true -v=8 --alsologtostderr: (1m3.476606003s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-591546
--- PASS: TestMultiNode/serial/RestartKeepsNodes (88.55s)

TestMultiNode/serial/DeleteNode (5.24s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-591546 node delete m03: (4.575075474s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.24s)

TestMultiNode/serial/StopMultiNode (23.83s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-591546 stop: (23.636141477s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-591546 status: exit status 7 (95.523751ms)

-- stdout --
	multinode-591546
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-591546-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-591546 status --alsologtostderr: exit status 7 (98.27973ms)

-- stdout --
	multinode-591546
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-591546-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0417 19:47:03.515393  808721 out.go:291] Setting OutFile to fd 1 ...
	I0417 19:47:03.515560  808721 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:47:03.515571  808721 out.go:304] Setting ErrFile to fd 2...
	I0417 19:47:03.515577  808721 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:47:03.515848  808721 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-688109/.minikube/bin
	I0417 19:47:03.516035  808721 out.go:298] Setting JSON to false
	I0417 19:47:03.516071  808721 mustload.go:65] Loading cluster: multinode-591546
	I0417 19:47:03.516174  808721 notify.go:220] Checking for updates...
	I0417 19:47:03.516523  808721 config.go:182] Loaded profile config "multinode-591546": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:47:03.516546  808721 status.go:255] checking status of multinode-591546 ...
	I0417 19:47:03.517083  808721 cli_runner.go:164] Run: docker container inspect multinode-591546 --format={{.State.Status}}
	I0417 19:47:03.534740  808721 status.go:330] multinode-591546 host status = "Stopped" (err=<nil>)
	I0417 19:47:03.534768  808721 status.go:343] host is not running, skipping remaining checks
	I0417 19:47:03.534775  808721 status.go:257] multinode-591546 status: &{Name:multinode-591546 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 19:47:03.534799  808721 status.go:255] checking status of multinode-591546-m02 ...
	I0417 19:47:03.535107  808721 cli_runner.go:164] Run: docker container inspect multinode-591546-m02 --format={{.State.Status}}
	I0417 19:47:03.551667  808721 status.go:330] multinode-591546-m02 host status = "Stopped" (err=<nil>)
	I0417 19:47:03.551691  808721 status.go:343] host is not running, skipping remaining checks
	I0417 19:47:03.551698  808721 status.go:257] multinode-591546-m02 status: &{Name:multinode-591546-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.83s)

TestMultiNode/serial/RestartMultiNode (61s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-591546 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0417 19:48:01.948299  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-591546 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m0.285988708s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591546 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (61.00s)

TestMultiNode/serial/ValidateNameConflict (34.12s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-591546
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-591546-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-591546-m02 --driver=docker  --container-runtime=crio: exit status 14 (92.752648ms)

-- stdout --
	* [multinode-591546-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18665
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18665-688109/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-688109/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-591546-m02' is duplicated with machine name 'multinode-591546-m02' in profile 'multinode-591546'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-591546-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-591546-m03 --driver=docker  --container-runtime=crio: (31.651311999s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-591546
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-591546: exit status 80 (348.101638ms)

-- stdout --
	* Adding node m03 to cluster multinode-591546 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-591546-m03 already exists in multinode-591546-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-591546-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-591546-m03: (1.968133394s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.12s)
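The two expected failures above come from minikube's profile/machine name validation: a new profile may not reuse a machine name that already belongs to an existing profile, which aborts with exit status 14 (`MK_USAGE`). A rough sketch of that uniqueness check — the machine list is hard-coded from this run, and `validate_profile_name` is a hypothetical re-creation, not minikube's actual code:

```shell
# Machine names as they exist at this point in the run above.
existing_machines="multinode-591546 multinode-591546-m02"

# Hypothetical re-creation of the check; the real validation lives inside
# minikube and exits with MK_USAGE (14) on a clash.
validate_profile_name() {
  for m in $existing_machines; do
    if [ "$1" = "$m" ]; then
      echo "Profile name '$1' is duplicated with machine name '$m'" >&2
      return 14
    fi
  done
  return 0
}

validate_profile_name multinode-591546-m02 || echo "rejected with status $?"
validate_profile_name brand-new-profile && echo "accepted"
```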

TestPreload (114.13s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-055688 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0417 19:49:20.883800  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-055688 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m20.778953098s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-055688 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-055688 image pull gcr.io/k8s-minikube/busybox: (1.882866242s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-055688
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-055688: (5.802251156s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-055688 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-055688 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (22.783097754s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-055688 image list
helpers_test.go:175: Cleaning up "test-preload-055688" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-055688
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-055688: (2.586358988s)
--- PASS: TestPreload (114.13s)

TestScheduledStopUnix (105.36s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-931258 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-931258 --memory=2048 --driver=docker  --container-runtime=crio: (29.368736383s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-931258 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-931258 -n scheduled-stop-931258
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-931258 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-931258 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-931258 -n scheduled-stop-931258
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-931258
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-931258 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-931258
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-931258: exit status 7 (69.103269ms)

-- stdout --
	scheduled-stop-931258
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-931258 -n scheduled-stop-931258
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-931258 -n scheduled-stop-931258: exit status 7 (69.534651ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-931258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-931258
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-931258: (4.3950935s)
--- PASS: TestScheduledStopUnix (105.36s)
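The sequence above (schedule 5m, cancel, schedule 15s, verify stopped) exercises minikube's scheduled-stop daemon. Its observable behavior can be approximated with a background timer — everything below is a simplified stand-in under stated assumptions, not minikube's actual implementation; `do_stop` and the timer bookkeeping are hypothetical:

```shell
# Simplified stand-in for scheduled stop: `--schedule` arms a timer, a second
# `--schedule` replaces the pending one, and `--cancel-scheduled` kills it.
result_file=$(mktemp)
do_stop() { echo "stopped" > "$result_file"; }

schedule_stop() {
  # Re-scheduling first cancels any pending timer (compare the test's
  # "os: process already finished" bookkeeping above).
  [ -n "$timer_pid" ] && kill "$timer_pid" 2>/dev/null
  ( sleep "$1" && do_stop ) &
  timer_pid=$!
}

cancel_scheduled() { kill "$timer_pid" 2>/dev/null; timer_pid=; }

schedule_stop 3      # like --schedule 5m: armed, host keeps running
cancel_scheduled     # like --cancel-scheduled: this timer never fires
schedule_stop 1      # like --schedule 15s: this one is allowed to fire
wait "$timer_pid" 2>/dev/null
cat "$result_file"   # the stop has run by now
```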

TestInsufficientStorage (11.04s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-641681 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-641681 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.56797911s)

-- stdout --
	{"specversion":"1.0","id":"657c3341-1b90-480f-8269-ce67e6b0a0a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-641681] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"22e92157-3953-4c94-a4b9-4ecf0e5f046c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18665"}}
	{"specversion":"1.0","id":"2a1f1258-0027-4164-9f54-d6fcf863f62b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0269d504-2ee3-476c-aa84-7bb8b00d2c93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18665-688109/kubeconfig"}}
	{"specversion":"1.0","id":"197a85bf-d90a-4dee-af5e-5aede3d7f096","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-688109/.minikube"}}
	{"specversion":"1.0","id":"8427f180-bb54-4c6f-a46c-6d0717f83119","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"2aa02f14-1b27-4fe5-990b-dda6ebf43c52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9bd6fcad-eabf-4486-b615-ea49aa1bc689","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3946059d-5df1-4f37-b3a5-2a427056cf76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"cc403611-10a5-4a9e-835c-6da92b3f71f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fcfc81c2-c4bb-438d-94ae-6d81e12b9cc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"5907b551-1dab-49ce-a3d0-0dd9c4a54f6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-641681\" primary control-plane node in \"insufficient-storage-641681\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fdaa86d5-0599-42a4-8ac6-91de1df1f31d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.43-1713236840-18649 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f783abde-1053-4821-9500-6590cf9adc36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"44fdc7da-f7fe-46b8-bb5a-d02f58b26ae5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-641681 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-641681 --output=json --layout=cluster: exit status 7 (292.408467ms)

-- stdout --
	{"Name":"insufficient-storage-641681","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-641681","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0417 19:52:31.075146  825385 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-641681" does not appear in /home/jenkins/minikube-integration/18665-688109/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-641681 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-641681 --output=json --layout=cluster: exit status 7 (290.596342ms)

-- stdout --
	{"Name":"insufficient-storage-641681","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-641681","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0417 19:52:31.370122  825440 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-641681" does not appear in /home/jenkins/minikube-integration/18665-688109/kubeconfig
	E0417 19:52:31.380918  825440 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/insufficient-storage-641681/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-641681" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-641681
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-641681: (1.888982539s)
--- PASS: TestInsufficientStorage (11.04s)

TestRunningBinaryUpgrade (92.96s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4169040573 start -p running-upgrade-735452 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4169040573 start -p running-upgrade-735452 --memory=2200 --vm-driver=docker  --container-runtime=crio: (35.246213525s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-735452 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-735452 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (54.402719145s)
helpers_test.go:175: Cleaning up "running-upgrade-735452" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-735452
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-735452: (2.181927447s)
--- PASS: TestRunningBinaryUpgrade (92.96s)

TestKubernetesUpgrade (393.37s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-033757 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0417 19:54:20.883722  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-033757 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m8.149527261s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-033757
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-033757: (1.334703804s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-033757 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-033757 status --format={{.Host}}: exit status 7 (101.621556ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-033757 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-033757 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m40.949719841s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-033757 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-033757 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-033757 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (128.842145ms)

-- stdout --
	* [kubernetes-upgrade-033757] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18665
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18665-688109/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-688109/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-033757
	    minikube start -p kubernetes-upgrade-033757 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0337572 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-033757 --kubernetes-version=v1.30.0-rc.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-033757 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-033757 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.009790022s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-033757" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-033757
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-033757: (2.578673356s)
--- PASS: TestKubernetesUpgrade (393.37s)

TestMissingContainerUpgrade (147.6s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2551993346 start -p missing-upgrade-652824 --memory=2200 --driver=docker  --container-runtime=crio
E0417 19:53:01.948049  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2551993346 start -p missing-upgrade-652824 --memory=2200 --driver=docker  --container-runtime=crio: (1m12.344984618s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-652824
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-652824: (10.427960948s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-652824
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-652824 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-652824 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m1.21714422s)
helpers_test.go:175: Cleaning up "missing-upgrade-652824" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-652824
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-652824: (2.376071693s)
--- PASS: TestMissingContainerUpgrade (147.60s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-892646 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-892646 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (92.896259ms)

-- stdout --
	* [NoKubernetes-892646] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18665
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18665-688109/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-688109/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (41.74s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-892646 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-892646 --driver=docker  --container-runtime=crio: (41.306681224s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-892646 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.74s)

TestNoKubernetes/serial/StartWithStopK8s (26.2s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-892646 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-892646 --no-kubernetes --driver=docker  --container-runtime=crio: (23.727884196s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-892646 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-892646 status -o json: exit status 2 (494.264405ms)

-- stdout --
	{"Name":"NoKubernetes-892646","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-892646
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-892646: (1.981369018s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (26.20s)

TestNoKubernetes/serial/Start (8.79s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-892646 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-892646 --no-kubernetes --driver=docker  --container-runtime=crio: (8.794751027s)
--- PASS: TestNoKubernetes/serial/Start (8.79s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-892646 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-892646 "sudo systemctl is-active --quiet service kubelet": exit status 1 (264.652807ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

TestNoKubernetes/serial/ProfileList (7.52s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-arm64 profile list: (3.466417666s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-arm64 profile list --output=json: (4.053050783s)
--- PASS: TestNoKubernetes/serial/ProfileList (7.52s)

TestNoKubernetes/serial/Stop (1.23s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-892646
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-892646: (1.22544564s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

TestNoKubernetes/serial/StartNoArgs (7.49s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-892646 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-892646 --driver=docker  --container-runtime=crio: (7.489075627s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.49s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-892646 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-892646 "sudo systemctl is-active --quiet service kubelet": exit status 1 (272.320434ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestStoppedBinaryUpgrade/Setup (1.11s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.11s)

TestStoppedBinaryUpgrade/Upgrade (78.08s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1870873977 start -p stopped-upgrade-011121 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1870873977 start -p stopped-upgrade-011121 --memory=2200 --vm-driver=docker  --container-runtime=crio: (45.696730803s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1870873977 -p stopped-upgrade-011121 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1870873977 -p stopped-upgrade-011121 stop: (2.6965091s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-011121 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0417 19:56:04.993579  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-011121 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.681910714s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (78.08s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-011121
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-011121: (1.255381611s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

TestPause/serial/Start (78.29s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-069181 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0417 19:58:01.948230  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-069181 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m18.288919554s)
--- PASS: TestPause/serial/Start (78.29s)

TestPause/serial/SecondStartNoReconfiguration (33.03s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-069181 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0417 19:59:20.883426  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-069181 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.004593166s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (33.03s)

TestPause/serial/Pause (1.21s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-069181 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-069181 --alsologtostderr -v=5: (1.210991999s)
--- PASS: TestPause/serial/Pause (1.21s)

TestPause/serial/VerifyStatus (0.51s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-069181 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-069181 --output=json --layout=cluster: exit status 2 (509.332982ms)

-- stdout --
	{"Name":"pause-069181","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-069181","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.51s)

TestPause/serial/Unpause (0.93s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-069181 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.93s)

TestPause/serial/PauseAgain (1.21s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-069181 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-069181 --alsologtostderr -v=5: (1.214584178s)
--- PASS: TestPause/serial/PauseAgain (1.21s)

TestPause/serial/DeletePaused (3.06s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-069181 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-069181 --alsologtostderr -v=5: (3.058722075s)
--- PASS: TestPause/serial/DeletePaused (3.06s)

TestPause/serial/VerifyDeletedResources (0.7s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-069181
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-069181: exit status 1 (13.543835ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-069181: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.70s)

TestNetworkPlugins/group/false (5.56s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-932737 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-932737 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (212.931565ms)

-- stdout --
	* [false-932737] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18665
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18665-688109/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-688109/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0417 20:00:38.892103  864858 out.go:291] Setting OutFile to fd 1 ...
	I0417 20:00:38.892306  864858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 20:00:38.892428  864858 out.go:304] Setting ErrFile to fd 2...
	I0417 20:00:38.892460  864858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 20:00:38.892755  864858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-688109/.minikube/bin
	I0417 20:00:38.893251  864858 out.go:298] Setting JSON to false
	I0417 20:00:38.894314  864858 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13386,"bootTime":1713370653,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1057-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0417 20:00:38.894421  864858 start.go:139] virtualization:  
	I0417 20:00:38.897436  864858 out.go:177] * [false-932737] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0417 20:00:38.900642  864858 out.go:177]   - MINIKUBE_LOCATION=18665
	I0417 20:00:38.902773  864858 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 20:00:38.900788  864858 notify.go:220] Checking for updates...
	I0417 20:00:38.905021  864858 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18665-688109/kubeconfig
	I0417 20:00:38.907245  864858 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-688109/.minikube
	I0417 20:00:38.909004  864858 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0417 20:00:38.911265  864858 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0417 20:00:38.913919  864858 config.go:182] Loaded profile config "kubernetes-upgrade-033757": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 20:00:38.914038  864858 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 20:00:38.934180  864858 docker.go:122] docker version: linux-26.0.1:Docker Engine - Community
	I0417 20:00:38.934320  864858 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0417 20:00:39.015930  864858 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-17 20:00:39.003694727 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0417 20:00:39.016059  864858 docker.go:295] overlay module found
	I0417 20:00:39.018913  864858 out.go:177] * Using the docker driver based on user configuration
	I0417 20:00:39.021197  864858 start.go:297] selected driver: docker
	I0417 20:00:39.021226  864858 start.go:901] validating driver "docker" against <nil>
	I0417 20:00:39.021240  864858 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0417 20:00:39.024069  864858 out.go:177] 
	W0417 20:00:39.026222  864858 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0417 20:00:39.027959  864858 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-932737 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-932737

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-932737

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-932737

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-932737

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-932737

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-932737

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-932737

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-932737

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-932737

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-932737

>>> host: /etc/nsswitch.conf:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: /etc/hosts:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: /etc/resolv.conf:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-932737

>>> host: crictl pods:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: crictl containers:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> k8s: describe netcat deployment:
error: context "false-932737" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-932737" does not exist

>>> k8s: netcat logs:
error: context "false-932737" does not exist

>>> k8s: describe coredns deployment:
error: context "false-932737" does not exist

>>> k8s: describe coredns pods:
error: context "false-932737" does not exist

>>> k8s: coredns logs:
error: context "false-932737" does not exist

>>> k8s: describe api server pod(s):
error: context "false-932737" does not exist

>>> k8s: api server logs:
error: context "false-932737" does not exist

>>> host: /etc/cni:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: ip a s:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: ip r s:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: iptables-save:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: iptables table nat:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> k8s: describe kube-proxy daemon set:
error: context "false-932737" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-932737" does not exist

>>> k8s: kube-proxy logs:
error: context "false-932737" does not exist

>>> host: kubelet daemon status:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: kubelet daemon config:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> k8s: kubelet logs:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-932737

>>> host: docker daemon status:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: docker daemon config:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: /etc/docker/daemon.json:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: docker system info:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: cri-docker daemon status:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: cri-docker daemon config:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: cri-dockerd version:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: containerd daemon status:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: containerd daemon config:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: /etc/containerd/config.toml:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: containerd config dump:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: crio daemon status:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: crio daemon config:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: /etc/crio:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

>>> host: crio config:
* Profile "false-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-932737"

----------------------- debugLogs end: false-932737 [took: 5.163719714s] --------------------------------
helpers_test.go:175: Cleaning up "false-932737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-932737
--- PASS: TestNetworkPlugins/group/false (5.56s)

TestStartStop/group/old-k8s-version/serial/FirstStart (157.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-490901 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0417 20:02:23.927101  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
E0417 20:03:01.948121  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
E0417 20:04:20.883686  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-490901 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m37.388462364s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (157.39s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-490901 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [38e4d1b0-6fed-4fac-9e5a-fc563274ee32] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [38e4d1b0-6fed-4fac-9e5a-fc563274ee32] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.037844933s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-490901 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.82s)

TestStartStop/group/no-preload/serial/FirstStart (63.02s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-514731 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-514731 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2: (1m3.016444742s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (63.02s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-490901 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-490901 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.678736163s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-490901 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.89s)

TestStartStop/group/old-k8s-version/serial/Stop (14.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-490901 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-490901 --alsologtostderr -v=3: (14.560868368s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (14.56s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-490901 -n old-k8s-version-490901
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-490901 -n old-k8s-version-490901: exit status 7 (84.642607ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-490901 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/old-k8s-version/serial/SecondStart (148.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-490901 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-490901 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m27.78565607s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-490901 -n old-k8s-version-490901
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (148.17s)

TestStartStop/group/no-preload/serial/DeployApp (9.49s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-514731 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2ba6a096-3212-41a5-828c-99624cf78e69] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2ba6a096-3212-41a5-828c-99624cf78e69] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005096263s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-514731 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.49s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.44s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-514731 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-514731 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.275128588s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-514731 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.44s)

TestStartStop/group/no-preload/serial/Stop (12.71s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-514731 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-514731 --alsologtostderr -v=3: (12.710591226s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.71s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-514731 -n no-preload-514731
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-514731 -n no-preload-514731: exit status 7 (77.242824ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-514731 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (266.13s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-514731 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-514731 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2: (4m25.771290061s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-514731 -n no-preload-514731
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.13s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-q4dv5" [4d29f1c8-62bf-416b-a672-0756a25398d7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.009273314s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-q4dv5" [4d29f1c8-62bf-416b-a672-0756a25398d7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005418557s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-490901 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-490901 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (2.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-490901 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-490901 -n old-k8s-version-490901
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-490901 -n old-k8s-version-490901: exit status 2 (327.131042ms)
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-490901 -n old-k8s-version-490901
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-490901 -n old-k8s-version-490901: exit status 2 (333.548564ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-490901 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-490901 -n old-k8s-version-490901
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-490901 -n old-k8s-version-490901
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.99s)

TestStartStop/group/embed-certs/serial/FirstStart (78.89s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-314888 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2
E0417 20:08:01.948502  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-314888 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2: (1m18.885937611s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (78.89s)

TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-314888 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [125a167c-d04c-4bdf-beb3-d9a07370c8de] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [125a167c-d04c-4bdf-beb3-d9a07370c8de] Running
E0417 20:09:20.882986  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003968119s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-314888 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-314888 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-314888 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/embed-certs/serial/Stop (12.02s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-314888 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-314888 --alsologtostderr -v=3: (12.023178756s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.02s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-314888 -n embed-certs-314888
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-314888 -n embed-certs-314888: exit status 7 (78.192727ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-314888 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (274.28s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-314888 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2
E0417 20:09:45.162537  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/old-k8s-version-490901/client.crt: no such file or directory
E0417 20:09:45.168905  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/old-k8s-version-490901/client.crt: no such file or directory
E0417 20:09:45.194459  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/old-k8s-version-490901/client.crt: no such file or directory
E0417 20:09:45.214748  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/old-k8s-version-490901/client.crt: no such file or directory
E0417 20:09:45.255060  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/old-k8s-version-490901/client.crt: no such file or directory
E0417 20:09:45.335492  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/old-k8s-version-490901/client.crt: no such file or directory
E0417 20:09:45.495765  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/old-k8s-version-490901/client.crt: no such file or directory
E0417 20:09:45.816617  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/old-k8s-version-490901/client.crt: no such file or directory
E0417 20:09:46.457547  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/old-k8s-version-490901/client.crt: no such file or directory
E0417 20:09:47.737889  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/old-k8s-version-490901/client.crt: no such file or directory
E0417 20:09:50.298541  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/old-k8s-version-490901/client.crt: no such file or directory
E0417 20:09:55.419487  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/old-k8s-version-490901/client.crt: no such file or directory
E0417 20:10:05.660544  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/old-k8s-version-490901/client.crt: no such file or directory
E0417 20:10:26.140783  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/old-k8s-version-490901/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-314888 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2: (4m33.910845144s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-314888 -n embed-certs-314888
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (274.28s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-xnnlw" [b647ff38-8cde-4e1e-bdd0-333f1635688e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003674914s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-xnnlw" [b647ff38-8cde-4e1e-bdd0-333f1635688e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004107652s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-514731 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-514731 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/no-preload/serial/Pause (3.09s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-514731 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-514731 -n no-preload-514731
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-514731 -n no-preload-514731: exit status 2 (312.285465ms)
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-514731 -n no-preload-514731
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-514731 -n no-preload-514731: exit status 2 (322.167094ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-514731 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-514731 -n no-preload-514731
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-514731 -n no-preload-514731
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.09s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-975968 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2
E0417 20:11:07.101606  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/old-k8s-version-490901/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-975968 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2: (50.970137822s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.97s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-975968 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2200ca44-3632-4ebe-baaf-c81acc7a0cd7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2200ca44-3632-4ebe-baaf-c81acc7a0cd7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004443485s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-975968 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-975968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-975968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.003920356s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-975968 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-975968 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-975968 --alsologtostderr -v=3: (12.168124709s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.17s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-975968 -n default-k8s-diff-port-975968
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-975968 -n default-k8s-diff-port-975968: exit status 7 (82.696876ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-975968 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (295.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-975968 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2
E0417 20:12:29.021841  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/old-k8s-version-490901/client.crt: no such file or directory
E0417 20:12:44.994733  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
E0417 20:13:01.948546  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-975968 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2: (4m55.391899367s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-975968 -n default-k8s-diff-port-975968
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (295.78s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-97blt" [6b3810c5-2d4a-433f-8ccd-166cf242e9cc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003345274s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-97blt" [6b3810c5-2d4a-433f-8ccd-166cf242e9cc] Running
E0417 20:14:20.883874  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003899666s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-314888 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-314888 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Pause (3.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-314888 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-314888 -n embed-certs-314888
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-314888 -n embed-certs-314888: exit status 2 (319.248397ms)
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-314888 -n embed-certs-314888
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-314888 -n embed-certs-314888: exit status 2 (364.444974ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-314888 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-314888 -n embed-certs-314888
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-314888 -n embed-certs-314888
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.13s)

TestStartStop/group/newest-cni/serial/FirstStart (47.2s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-564364 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2
E0417 20:14:45.162057  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/old-k8s-version-490901/client.crt: no such file or directory
E0417 20:15:12.862998  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/old-k8s-version-490901/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-564364 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2: (47.203303077s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.20s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-564364 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-564364 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.024905017s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/newest-cni/serial/Stop (1.28s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-564364 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-564364 --alsologtostderr -v=3: (1.28057379s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.28s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-564364 -n newest-cni-564364
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-564364 -n newest-cni-564364: exit status 7 (73.573547ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-564364 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (16.87s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-564364 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-564364 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2: (16.41338819s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-564364 -n newest-cni-564364
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.87s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-564364 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (2.99s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-564364 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-564364 -n newest-cni-564364
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-564364 -n newest-cni-564364: exit status 2 (338.290319ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-564364 -n newest-cni-564364
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-564364 -n newest-cni-564364: exit status 2 (372.879171ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-564364 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-564364 -n newest-cni-564364
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-564364 -n newest-cni-564364
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.99s)

TestNetworkPlugins/group/auto/Start (78.57s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-932737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0417 20:15:51.141898  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/no-preload-514731/client.crt: no such file or directory
E0417 20:15:51.147153  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/no-preload-514731/client.crt: no such file or directory
E0417 20:15:51.157479  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/no-preload-514731/client.crt: no such file or directory
E0417 20:15:51.177747  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/no-preload-514731/client.crt: no such file or directory
E0417 20:15:51.218756  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/no-preload-514731/client.crt: no such file or directory
E0417 20:15:51.299359  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/no-preload-514731/client.crt: no such file or directory
E0417 20:15:51.459593  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/no-preload-514731/client.crt: no such file or directory
E0417 20:15:51.780086  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/no-preload-514731/client.crt: no such file or directory
E0417 20:15:52.421229  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/no-preload-514731/client.crt: no such file or directory
E0417 20:15:53.702173  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/no-preload-514731/client.crt: no such file or directory
E0417 20:15:56.262748  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/no-preload-514731/client.crt: no such file or directory
E0417 20:16:01.383175  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/no-preload-514731/client.crt: no such file or directory
E0417 20:16:11.623991  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/no-preload-514731/client.crt: no such file or directory
E0417 20:16:32.105127  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/no-preload-514731/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-932737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m18.570963036s)
--- PASS: TestNetworkPlugins/group/auto/Start (78.57s)

TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-932737 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

TestNetworkPlugins/group/auto/NetCatPod (10.39s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-932737 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5b6r8" [cc4ae014-9b9d-487f-b153-98f4f6c20823] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5b6r8" [cc4ae014-9b9d-487f-b153-98f4f6c20823] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004026906s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.39s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-5ksbt" [4dc8d193-43ba-4bff-b461-a9c91fd1a9f2] Running
E0417 20:17:13.065384  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/no-preload-514731/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003950935s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-932737 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-932737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-932737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-5ksbt" [4dc8d193-43ba-4bff-b461-a9c91fd1a9f2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004452654s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-975968 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-975968 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-975968 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-975968 --alsologtostderr -v=1: (1.207370599s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-975968 -n default-k8s-diff-port-975968
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-975968 -n default-k8s-diff-port-975968: exit status 2 (457.922223ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-975968 -n default-k8s-diff-port-975968
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-975968 -n default-k8s-diff-port-975968: exit status 2 (427.711829ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-975968 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-975968 --alsologtostderr -v=1: (1.050148117s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-975968 -n default-k8s-diff-port-975968
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-975968 -n default-k8s-diff-port-975968
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.39s)
E0417 20:23:25.259735  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/auto-932737/client.crt: no such file or directory

TestNetworkPlugins/group/kindnet/Start (85.75s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-932737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-932737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m25.754833673s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.75s)

TestNetworkPlugins/group/calico/Start (77.76s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-932737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0417 20:18:01.948479  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
E0417 20:18:34.986499  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/no-preload-514731/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-932737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m17.762210337s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.76s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6rwd8" [8cd7e515-1484-48d9-be0d-b0438777befa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.008702828s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-cl7cf" [83c5801a-6458-4b0c-ba49-3156ae2ab6a9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005980438s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.55s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-932737 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.55s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-932737 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-vp556" [d74b8b2c-8045-43f5-a96e-2bdb0097293e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-vp556" [d74b8b2c-8045-43f5-a96e-2bdb0097293e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.00343183s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.27s)

TestNetworkPlugins/group/calico/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-932737 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.45s)

TestNetworkPlugins/group/calico/NetCatPod (10.45s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-932737 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-68ccs" [5f7c69f9-eba2-44b5-9496-cf459bae151c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0417 20:19:03.928067  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/functional-532156/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-68ccs" [5f7c69f9-eba2-44b5-9496-cf459bae151c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004774525s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.45s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-932737 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-932737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-932737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/calico/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-932737 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.30s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-932737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-932737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/custom-flannel/Start (72.07s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-932737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-932737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m12.067832876s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (72.07s)

TestNetworkPlugins/group/enable-default-cni/Start (96.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-932737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0417 20:19:45.162824  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/old-k8s-version-490901/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-932737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m36.164608603s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (96.16s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-932737 "pgrep -a kubelet"
E0417 20:20:51.141426  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/no-preload-514731/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-932737 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5fkqp" [0c6726c9-0b5c-488c-974e-349a520a1bb5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5fkqp" [0c6726c9-0b5c-488c-974e-349a520a1bb5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004078681s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-932737 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-932737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-932737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-932737 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-932737 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-87l6r" [7625229d-89f3-4222-8105-b9b5820f0342] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0417 20:21:18.827027  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/no-preload-514731/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-87l6r" [7625229d-89f3-4222-8105-b9b5820f0342] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.003903828s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.31s)

TestNetworkPlugins/group/flannel/Start (70.3s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-932737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-932737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m10.295930567s)
--- PASS: TestNetworkPlugins/group/flannel/Start (70.30s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-932737 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.34s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-932737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.35s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-932737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.27s)

TestNetworkPlugins/group/bridge/Start (63.77s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-932737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0417 20:21:59.891706  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/default-k8s-diff-port-975968/client.crt: no such file or directory
E0417 20:22:03.337444  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/auto-932737/client.crt: no such file or directory
E0417 20:22:03.342689  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/auto-932737/client.crt: no such file or directory
E0417 20:22:03.353049  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/auto-932737/client.crt: no such file or directory
E0417 20:22:03.373355  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/auto-932737/client.crt: no such file or directory
E0417 20:22:03.413697  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/auto-932737/client.crt: no such file or directory
E0417 20:22:03.494058  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/auto-932737/client.crt: no such file or directory
E0417 20:22:03.654973  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/auto-932737/client.crt: no such file or directory
E0417 20:22:03.975317  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/auto-932737/client.crt: no such file or directory
E0417 20:22:04.616054  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/auto-932737/client.crt: no such file or directory
E0417 20:22:05.896953  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/auto-932737/client.crt: no such file or directory
E0417 20:22:08.457082  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/auto-932737/client.crt: no such file or directory
E0417 20:22:10.132558  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/default-k8s-diff-port-975968/client.crt: no such file or directory
E0417 20:22:13.577863  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/auto-932737/client.crt: no such file or directory
E0417 20:22:23.818591  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/auto-932737/client.crt: no such file or directory
E0417 20:22:30.613099  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/default-k8s-diff-port-975968/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-932737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m3.769778961s)
--- PASS: TestNetworkPlugins/group/bridge/Start (63.77s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-z54ss" [9282cf2e-61f1-4425-9344-d731ba48a7b0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004618423s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-932737 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/flannel/NetCatPod (11.41s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-932737 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-rdqcx" [37dd1b6e-4f59-4b6e-a349-a164209ec115] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0417 20:22:44.298933  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/auto-932737/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-rdqcx" [37dd1b6e-4f59-4b6e-a349-a164209ec115] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004258152s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.41s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-932737 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-932737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-932737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-932737 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

TestNetworkPlugins/group/bridge/NetCatPod (12.46s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-932737 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-xtcn5" [41080eac-1435-4c87-95f6-3756eb530c44] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0417 20:23:01.948498  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/addons-873604/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-xtcn5" [41080eac-1435-4c87-95f6-3756eb530c44] Running
E0417 20:23:11.573616  693518 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/default-k8s-diff-port-975968/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004904251s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.46s)

TestNetworkPlugins/group/bridge/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-932737 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.28s)

TestNetworkPlugins/group/bridge/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-932737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

TestNetworkPlugins/group/bridge/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-932737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

Test skip (29/327)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.30.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.30.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0.57s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-474356 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-474356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-474356
--- SKIP: TestDownloadOnlyKic (0.57s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-878588" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-878588
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-932737 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-932737

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-932737

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-932737

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-932737

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-932737

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-932737

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-932737

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-932737

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-932737

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-932737

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: /etc/hosts:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: /etc/resolv.conf:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-932737

>>> host: crictl pods:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: crictl containers:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> k8s: describe netcat deployment:
error: context "kubenet-932737" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-932737" does not exist

>>> k8s: netcat logs:
error: context "kubenet-932737" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-932737" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-932737" does not exist

>>> k8s: coredns logs:
error: context "kubenet-932737" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-932737" does not exist

>>> k8s: api server logs:
error: context "kubenet-932737" does not exist

>>> host: /etc/cni:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: ip a s:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: ip r s:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: iptables-save:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: iptables table nat:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-932737" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-932737" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-932737" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: kubelet daemon config:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> k8s: kubelet logs:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18665-688109/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Apr 2024 20:00:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-033757
contexts:
- context:
    cluster: kubernetes-upgrade-033757
    extensions:
    - extension:
        last-update: Wed, 17 Apr 2024 20:00:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-033757
  name: kubernetes-upgrade-033757
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-033757
  user:
    client-certificate: /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/kubernetes-upgrade-033757/client.crt
    client-key: /home/jenkins/minikube-integration/18665-688109/.minikube/profiles/kubernetes-upgrade-033757/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-932737

>>> host: docker daemon status:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: docker daemon config:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: docker system info:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: cri-docker daemon status:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: cri-docker daemon config:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: cri-dockerd version:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: containerd daemon status:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: containerd daemon config:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: containerd config dump:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: crio daemon status:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: crio daemon config:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: /etc/crio:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

>>> host: crio config:
* Profile "kubenet-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-932737"

----------------------- debugLogs end: kubenet-932737 [took: 4.855345738s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-932737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-932737
--- SKIP: TestNetworkPlugins/group/kubenet (5.07s)

TestNetworkPlugins/group/cilium (6.34s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-932737 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-932737

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-932737

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-932737

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-932737

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-932737

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-932737

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-932737

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-932737

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-932737

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-932737

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-932737

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-932737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-932737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-932737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-932737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-932737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-932737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-932737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-932737" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-932737

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-932737

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-932737" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-932737" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-932737

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-932737

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-932737" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-932737" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-932737" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-932737" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-932737" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

>>> host: kubelet daemon config:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

>>> k8s: kubelet logs:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-932737

>>> host: docker daemon status:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

>>> host: docker daemon config:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

>>> host: docker system info:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

>>> host: cri-docker daemon status:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

>>> host: cri-docker daemon config:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

>>> host: cri-dockerd version:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

>>> host: containerd daemon status:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

>>> host: containerd daemon config:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

>>> host: containerd config dump:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

>>> host: crio daemon status:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

>>> host: crio daemon config:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

>>> host: /etc/crio:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

>>> host: crio config:
* Profile "cilium-932737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-932737"

----------------------- debugLogs end: cilium-932737 [took: 6.154759519s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-932737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-932737
--- SKIP: TestNetworkPlugins/group/cilium (6.34s)
