Test Report: Docker_Linux_crio_arm64 19461

ee4f5fb2e73abafca70b3598ab7977372efc25a8:2024-08-16:35814

Failed tests (2/328)

| Order | Failed test                       | Duration (s) |
|-------|-----------------------------------|--------------|
| 34    | TestAddons/parallel/Ingress       | 153.49       |
| 36    | TestAddons/parallel/MetricsServer | 325.61       |
TestAddons/parallel/Ingress (153.49s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-035693 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-035693 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-035693 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9a86ad02-4c45-49c4-97f7-fc426a6cff4e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9a86ad02-4c45-49c4-97f7-fc426a6cff4e] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.002950669s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-035693 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-035693 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.467760784s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-035693 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-035693 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-035693 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-035693 addons disable ingress-dns --alsologtostderr -v=1: (1.466093058s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-035693 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-035693 addons disable ingress --alsologtostderr -v=1: (7.757548643s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-035693
helpers_test.go:235: (dbg) docker inspect addons-035693:

-- stdout --
	[
	    {
	        "Id": "476c3ad0d88c065a4f85fa189ca7b29f9ec649f33c7f7825b15962224faa28a8",
	        "Created": "2024-08-16T17:49:37.189676917Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 285535,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-16T17:49:37.337084512Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:decdd59746a9dba10062a73f6cd4b910c7b4e60613660b1022f8357747681c4d",
	        "ResolvConfPath": "/var/lib/docker/containers/476c3ad0d88c065a4f85fa189ca7b29f9ec649f33c7f7825b15962224faa28a8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/476c3ad0d88c065a4f85fa189ca7b29f9ec649f33c7f7825b15962224faa28a8/hostname",
	        "HostsPath": "/var/lib/docker/containers/476c3ad0d88c065a4f85fa189ca7b29f9ec649f33c7f7825b15962224faa28a8/hosts",
	        "LogPath": "/var/lib/docker/containers/476c3ad0d88c065a4f85fa189ca7b29f9ec649f33c7f7825b15962224faa28a8/476c3ad0d88c065a4f85fa189ca7b29f9ec649f33c7f7825b15962224faa28a8-json.log",
	        "Name": "/addons-035693",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-035693:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-035693",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/947dbc3b107a5acd33a6b006e02949c9b2452bfe48994eeae84559323de14ce1-init/diff:/var/lib/docker/overlay2/70037d522e00dd0a89a9843a2c58153706242dc665eddca7b5915c2487a67ddf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/947dbc3b107a5acd33a6b006e02949c9b2452bfe48994eeae84559323de14ce1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/947dbc3b107a5acd33a6b006e02949c9b2452bfe48994eeae84559323de14ce1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/947dbc3b107a5acd33a6b006e02949c9b2452bfe48994eeae84559323de14ce1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-035693",
	                "Source": "/var/lib/docker/volumes/addons-035693/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-035693",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-035693",
	                "name.minikube.sigs.k8s.io": "addons-035693",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "92e865572677ca1bbcd69c32a59062040b6c58e3627396f20143012c7bea7194",
	            "SandboxKey": "/var/run/docker/netns/92e865572677",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-035693": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8b521fbf1af19d1dfad15c433f7e1bd8503f271e638f4b949c047a9c0b659da8",
	                    "EndpointID": "fdf53fe0696062dae63d98a57ebd77b53282244e6dbe57c6383a7ded0bc64d06",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-035693",
	                        "476c3ad0d88c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-035693 -n addons-035693
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-035693 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-035693 logs -n 25: (1.364393882s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-628669                                                                     | download-only-628669   | jenkins | v1.33.1 | 16 Aug 24 17:49 UTC | 16 Aug 24 17:49 UTC |
	| start   | --download-only -p                                                                          | download-docker-240993 | jenkins | v1.33.1 | 16 Aug 24 17:49 UTC |                     |
	|         | download-docker-240993                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-240993                                                                   | download-docker-240993 | jenkins | v1.33.1 | 16 Aug 24 17:49 UTC | 16 Aug 24 17:49 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-205704   | jenkins | v1.33.1 | 16 Aug 24 17:49 UTC |                     |
	|         | binary-mirror-205704                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41837                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-205704                                                                     | binary-mirror-205704   | jenkins | v1.33.1 | 16 Aug 24 17:49 UTC | 16 Aug 24 17:49 UTC |
	| addons  | disable dashboard -p                                                                        | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:49 UTC |                     |
	|         | addons-035693                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:49 UTC |                     |
	|         | addons-035693                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-035693 --wait=true                                                                | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:49 UTC | 16 Aug 24 17:52 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-035693 addons disable                                                                | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:52 UTC | 16 Aug 24 17:53 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-035693 addons disable                                                                | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:53 UTC | 16 Aug 24 17:53 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ip      | addons-035693 ip                                                                            | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:53 UTC | 16 Aug 24 17:53 UTC |
	| addons  | addons-035693 addons disable                                                                | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:53 UTC | 16 Aug 24 17:53 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:53 UTC | 16 Aug 24 17:53 UTC |
	|         | -p addons-035693                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-035693 ssh cat                                                                       | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:53 UTC | 16 Aug 24 17:53 UTC |
	|         | /opt/local-path-provisioner/pvc-8f665f0d-7f70-4b2f-b5f6-7d515479e3bb_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-035693 addons disable                                                                | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:53 UTC | 16 Aug 24 17:53 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:53 UTC | 16 Aug 24 17:53 UTC |
	|         | addons-035693                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:53 UTC | 16 Aug 24 17:53 UTC |
	|         | -p addons-035693                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-035693 addons                                                                        | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:53 UTC | 16 Aug 24 17:53 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-035693 addons                                                                        | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:53 UTC | 16 Aug 24 17:53 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-035693 addons disable                                                                | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:53 UTC | 16 Aug 24 17:54 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:54 UTC | 16 Aug 24 17:54 UTC |
	|         | addons-035693                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-035693 ssh curl -s                                                                   | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:54 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-035693 ip                                                                            | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:56 UTC | 16 Aug 24 17:56 UTC |
	| addons  | addons-035693 addons disable                                                                | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:56 UTC | 16 Aug 24 17:56 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-035693 addons disable                                                                | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:56 UTC | 16 Aug 24 17:56 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 17:49:12
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 17:49:12.491073  285045 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:49:12.491277  285045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:49:12.491304  285045 out.go:358] Setting ErrFile to fd 2...
	I0816 17:49:12.491325  285045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:49:12.491607  285045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-278896/.minikube/bin
	I0816 17:49:12.492129  285045 out.go:352] Setting JSON to false
	I0816 17:49:12.493053  285045 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5501,"bootTime":1723825052,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0816 17:49:12.493162  285045 start.go:139] virtualization:  
	I0816 17:49:12.495600  285045 out.go:177] * [addons-035693] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0816 17:49:12.497898  285045 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 17:49:12.498025  285045 notify.go:220] Checking for updates...
	I0816 17:49:12.501298  285045 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 17:49:12.503133  285045 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-278896/kubeconfig
	I0816 17:49:12.504749  285045 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-278896/.minikube
	I0816 17:49:12.506579  285045 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0816 17:49:12.508261  285045 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 17:49:12.510104  285045 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 17:49:12.533541  285045 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0816 17:49:12.533661  285045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 17:49:12.597131  285045 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-16 17:49:12.587883065 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 17:49:12.597250  285045 docker.go:307] overlay module found
	I0816 17:49:12.600472  285045 out.go:177] * Using the docker driver based on user configuration
	I0816 17:49:12.602096  285045 start.go:297] selected driver: docker
	I0816 17:49:12.602113  285045 start.go:901] validating driver "docker" against <nil>
	I0816 17:49:12.602127  285045 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 17:49:12.602776  285045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 17:49:12.661686  285045 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-16 17:49:12.650713657 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 17:49:12.661883  285045 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 17:49:12.662127  285045 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 17:49:12.664148  285045 out.go:177] * Using Docker driver with root privileges
	I0816 17:49:12.666027  285045 cni.go:84] Creating CNI manager for ""
	I0816 17:49:12.666058  285045 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0816 17:49:12.666076  285045 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 17:49:12.666204  285045 start.go:340] cluster config:
	{Name:addons-035693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-035693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:49:12.668308  285045 out.go:177] * Starting "addons-035693" primary control-plane node in "addons-035693" cluster
	I0816 17:49:12.669958  285045 cache.go:121] Beginning downloading kic base image for docker with crio
	I0816 17:49:12.671810  285045 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0816 17:49:12.673468  285045 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 17:49:12.673519  285045 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-278896/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0816 17:49:12.673536  285045 cache.go:56] Caching tarball of preloaded images
	I0816 17:49:12.673567  285045 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0816 17:49:12.673623  285045 preload.go:172] Found /home/jenkins/minikube-integration/19461-278896/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0816 17:49:12.673633  285045 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 17:49:12.673997  285045 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/config.json ...
	I0816 17:49:12.674030  285045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/config.json: {Name:mkbb27cbeedd58dd6672b815036d375a27c5cf00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:49:12.688632  285045 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0816 17:49:12.688756  285045 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0816 17:49:12.688783  285045 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0816 17:49:12.688789  285045 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0816 17:49:12.688796  285045 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0816 17:49:12.688807  285045 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0816 17:49:29.783534  285045 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0816 17:49:29.783577  285045 cache.go:194] Successfully downloaded all kic artifacts
	I0816 17:49:29.783616  285045 start.go:360] acquireMachinesLock for addons-035693: {Name:mk10c159bb3bc4a2c181acf77d64f0fe4d1d4dec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 17:49:29.783753  285045 start.go:364] duration metric: took 110.227µs to acquireMachinesLock for "addons-035693"
	I0816 17:49:29.783790  285045 start.go:93] Provisioning new machine with config: &{Name:addons-035693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-035693 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 17:49:29.783869  285045 start.go:125] createHost starting for "" (driver="docker")
	I0816 17:49:29.786258  285045 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0816 17:49:29.786514  285045 start.go:159] libmachine.API.Create for "addons-035693" (driver="docker")
	I0816 17:49:29.786556  285045 client.go:168] LocalClient.Create starting
	I0816 17:49:29.786673  285045 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19461-278896/.minikube/certs/ca.pem
	I0816 17:49:30.086025  285045 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19461-278896/.minikube/certs/cert.pem
	I0816 17:49:30.701732  285045 cli_runner.go:164] Run: docker network inspect addons-035693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0816 17:49:30.717106  285045 cli_runner.go:211] docker network inspect addons-035693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0816 17:49:30.717189  285045 network_create.go:284] running [docker network inspect addons-035693] to gather additional debugging logs...
	I0816 17:49:30.717210  285045 cli_runner.go:164] Run: docker network inspect addons-035693
	W0816 17:49:30.732203  285045 cli_runner.go:211] docker network inspect addons-035693 returned with exit code 1
	I0816 17:49:30.732236  285045 network_create.go:287] error running [docker network inspect addons-035693]: docker network inspect addons-035693: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-035693 not found
	I0816 17:49:30.732250  285045 network_create.go:289] output of [docker network inspect addons-035693]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-035693 not found
	
	** /stderr **
	I0816 17:49:30.732353  285045 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 17:49:30.747845  285045 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400177be90}
	I0816 17:49:30.747888  285045 network_create.go:124] attempt to create docker network addons-035693 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0816 17:49:30.747955  285045 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-035693 addons-035693
	I0816 17:49:30.815066  285045 network_create.go:108] docker network addons-035693 192.168.49.0/24 created
	I0816 17:49:30.815110  285045 kic.go:121] calculated static IP "192.168.49.2" for the "addons-035693" container
	I0816 17:49:30.815186  285045 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0816 17:49:30.829817  285045 cli_runner.go:164] Run: docker volume create addons-035693 --label name.minikube.sigs.k8s.io=addons-035693 --label created_by.minikube.sigs.k8s.io=true
	I0816 17:49:30.846456  285045 oci.go:103] Successfully created a docker volume addons-035693
	I0816 17:49:30.846552  285045 cli_runner.go:164] Run: docker run --rm --name addons-035693-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-035693 --entrypoint /usr/bin/test -v addons-035693:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib
	I0816 17:49:32.942429  285045 cli_runner.go:217] Completed: docker run --rm --name addons-035693-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-035693 --entrypoint /usr/bin/test -v addons-035693:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib: (2.095830883s)
	I0816 17:49:32.942463  285045 oci.go:107] Successfully prepared a docker volume addons-035693
	I0816 17:49:32.942494  285045 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 17:49:32.942515  285045 kic.go:194] Starting extracting preloaded images to volume ...
	I0816 17:49:32.942594  285045 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19461-278896/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-035693:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir
	I0816 17:49:37.120912  285045 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19461-278896/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-035693:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir: (4.178279408s)
	I0816 17:49:37.120958  285045 kic.go:203] duration metric: took 4.178427772s to extract preloaded images to volume ...
	W0816 17:49:37.121092  285045 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0816 17:49:37.121201  285045 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0816 17:49:37.175152  285045 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-035693 --name addons-035693 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-035693 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-035693 --network addons-035693 --ip 192.168.49.2 --volume addons-035693:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d
	I0816 17:49:37.486971  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Running}}
	I0816 17:49:37.507017  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:49:37.529165  285045 cli_runner.go:164] Run: docker exec addons-035693 stat /var/lib/dpkg/alternatives/iptables
	I0816 17:49:37.597985  285045 oci.go:144] the created container "addons-035693" has a running status.
	I0816 17:49:37.598017  285045 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa...
	I0816 17:49:38.349462  285045 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0816 17:49:38.383555  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:49:38.402620  285045 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0816 17:49:38.402640  285045 kic_runner.go:114] Args: [docker exec --privileged addons-035693 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0816 17:49:38.469526  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:49:38.486282  285045 machine.go:93] provisionDockerMachine start ...
	I0816 17:49:38.486371  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:49:38.504535  285045 main.go:141] libmachine: Using SSH client type: native
	I0816 17:49:38.504839  285045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I0816 17:49:38.504857  285045 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 17:49:38.647906  285045 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-035693
	
	I0816 17:49:38.647972  285045 ubuntu.go:169] provisioning hostname "addons-035693"
	I0816 17:49:38.648045  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:49:38.666123  285045 main.go:141] libmachine: Using SSH client type: native
	I0816 17:49:38.666382  285045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I0816 17:49:38.666401  285045 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-035693 && echo "addons-035693" | sudo tee /etc/hostname
	I0816 17:49:38.808187  285045 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-035693
	
	I0816 17:49:38.808291  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:49:38.825627  285045 main.go:141] libmachine: Using SSH client type: native
	I0816 17:49:38.825867  285045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I0816 17:49:38.825890  285045 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-035693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-035693/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-035693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 17:49:38.960899  285045 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 17:49:38.960928  285045 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19461-278896/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-278896/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-278896/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-278896/.minikube}
	I0816 17:49:38.960957  285045 ubuntu.go:177] setting up certificates
	I0816 17:49:38.960966  285045 provision.go:84] configureAuth start
	I0816 17:49:38.961034  285045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-035693
	I0816 17:49:38.977611  285045 provision.go:143] copyHostCerts
	I0816 17:49:38.977695  285045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-278896/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-278896/.minikube/ca.pem (1082 bytes)
	I0816 17:49:38.977826  285045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-278896/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-278896/.minikube/cert.pem (1123 bytes)
	I0816 17:49:38.977887  285045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-278896/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-278896/.minikube/key.pem (1679 bytes)
	I0816 17:49:38.977939  285045 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-278896/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-278896/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-278896/.minikube/certs/ca-key.pem org=jenkins.addons-035693 san=[127.0.0.1 192.168.49.2 addons-035693 localhost minikube]
	I0816 17:49:39.140644  285045 provision.go:177] copyRemoteCerts
	I0816 17:49:39.140722  285045 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 17:49:39.140769  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:49:39.157356  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:49:39.249753  285045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-278896/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 17:49:39.275376  285045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-278896/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 17:49:39.299604  285045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-278896/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 17:49:39.323771  285045 provision.go:87] duration metric: took 362.782383ms to configureAuth
	I0816 17:49:39.323802  285045 ubuntu.go:193] setting minikube options for container-runtime
	I0816 17:49:39.323992  285045 config.go:182] Loaded profile config "addons-035693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:49:39.324105  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:49:39.340782  285045 main.go:141] libmachine: Using SSH client type: native
	I0816 17:49:39.341020  285045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I0816 17:49:39.341039  285045 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 17:49:39.576337  285045 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 17:49:39.576361  285045 machine.go:96] duration metric: took 1.090058129s to provisionDockerMachine
	I0816 17:49:39.576372  285045 client.go:171] duration metric: took 9.789804776s to LocalClient.Create
	I0816 17:49:39.576387  285045 start.go:167] duration metric: took 9.789873149s to libmachine.API.Create "addons-035693"
	I0816 17:49:39.576404  285045 start.go:293] postStartSetup for "addons-035693" (driver="docker")
	I0816 17:49:39.576414  285045 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 17:49:39.576477  285045 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 17:49:39.576517  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:49:39.595717  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:49:39.689843  285045 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 17:49:39.692978  285045 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 17:49:39.693014  285045 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 17:49:39.693026  285045 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 17:49:39.693033  285045 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0816 17:49:39.693044  285045 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-278896/.minikube/addons for local assets ...
	I0816 17:49:39.693118  285045 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-278896/.minikube/files for local assets ...
	I0816 17:49:39.693149  285045 start.go:296] duration metric: took 116.739657ms for postStartSetup
	I0816 17:49:39.693484  285045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-035693
	I0816 17:49:39.709424  285045 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/config.json ...
	I0816 17:49:39.709708  285045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:49:39.709767  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:49:39.726164  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:49:39.817366  285045 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0816 17:49:39.821749  285045 start.go:128] duration metric: took 10.037862917s to createHost
	I0816 17:49:39.821775  285045 start.go:83] releasing machines lock for "addons-035693", held for 10.038008836s
	I0816 17:49:39.821855  285045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-035693
	I0816 17:49:39.837362  285045 ssh_runner.go:195] Run: cat /version.json
	I0816 17:49:39.837413  285045 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 17:49:39.837426  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:49:39.837488  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:49:39.860745  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:49:39.863482  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:49:40.118456  285045 ssh_runner.go:195] Run: systemctl --version
	I0816 17:49:40.123336  285045 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 17:49:40.268896  285045 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 17:49:40.273440  285045 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 17:49:40.295465  285045 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0816 17:49:40.295550  285045 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 17:49:40.333676  285045 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0816 17:49:40.333703  285045 start.go:495] detecting cgroup driver to use...
	I0816 17:49:40.333739  285045 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0816 17:49:40.333792  285045 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 17:49:40.352299  285045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 17:49:40.365554  285045 docker.go:217] disabling cri-docker service (if available) ...
	I0816 17:49:40.365707  285045 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 17:49:40.382014  285045 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 17:49:40.397221  285045 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 17:49:40.480365  285045 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 17:49:40.585807  285045 docker.go:233] disabling docker service ...
	I0816 17:49:40.585884  285045 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 17:49:40.606451  285045 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 17:49:40.618549  285045 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 17:49:40.715159  285045 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 17:49:40.812318  285045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 17:49:40.824829  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 17:49:40.841260  285045 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 17:49:40.841330  285045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:49:40.851380  285045 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 17:49:40.851504  285045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:49:40.861311  285045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:49:40.871201  285045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:49:40.881521  285045 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 17:49:40.891015  285045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:49:40.900769  285045 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:49:40.916899  285045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:49:40.926850  285045 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 17:49:40.935824  285045 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 17:49:40.944250  285045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:49:41.022918  285045 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 17:49:41.135330  285045 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 17:49:41.135412  285045 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 17:49:41.138964  285045 start.go:563] Will wait 60s for crictl version
	I0816 17:49:41.139074  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:49:41.142934  285045 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 17:49:41.181669  285045 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0816 17:49:41.181795  285045 ssh_runner.go:195] Run: crio --version
	I0816 17:49:41.228599  285045 ssh_runner.go:195] Run: crio --version
	I0816 17:49:41.269844  285045 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0816 17:49:41.272021  285045 cli_runner.go:164] Run: docker network inspect addons-035693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 17:49:41.289933  285045 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0816 17:49:41.293620  285045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 17:49:41.304816  285045 kubeadm.go:883] updating cluster {Name:addons-035693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-035693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 17:49:41.304943  285045 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 17:49:41.305024  285045 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 17:49:41.383622  285045 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 17:49:41.383646  285045 crio.go:433] Images already preloaded, skipping extraction
	I0816 17:49:41.383700  285045 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 17:49:41.424870  285045 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 17:49:41.424931  285045 cache_images.go:84] Images are preloaded, skipping loading
	I0816 17:49:41.424941  285045 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0816 17:49:41.425043  285045 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-035693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-035693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 17:49:41.425123  285045 ssh_runner.go:195] Run: crio config
	I0816 17:49:41.477162  285045 cni.go:84] Creating CNI manager for ""
	I0816 17:49:41.477188  285045 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0816 17:49:41.477205  285045 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 17:49:41.477229  285045 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-035693 NodeName:addons-035693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 17:49:41.477382  285045 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-035693"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 17:49:41.477468  285045 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 17:49:41.486473  285045 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 17:49:41.486546  285045 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 17:49:41.495546  285045 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0816 17:49:41.513750  285045 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 17:49:41.531574  285045 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0816 17:49:41.550204  285045 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0816 17:49:41.553797  285045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 17:49:41.564907  285045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:49:41.657386  285045 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 17:49:41.671102  285045 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693 for IP: 192.168.49.2
	I0816 17:49:41.671134  285045 certs.go:194] generating shared ca certs ...
	I0816 17:49:41.671152  285045 certs.go:226] acquiring lock for ca certs: {Name:mk5387cb6cbb5a544c3c082f10b573950a035d73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:49:41.671320  285045 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-278896/.minikube/ca.key
	I0816 17:49:41.980959  285045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-278896/.minikube/ca.crt ...
	I0816 17:49:41.980993  285045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-278896/.minikube/ca.crt: {Name:mk5214fbcb931ec9c573571ab7e2e949722d8301 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:49:41.981618  285045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-278896/.minikube/ca.key ...
	I0816 17:49:41.981634  285045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-278896/.minikube/ca.key: {Name:mk4aa1aedafc855a8e7dc18ed3510793dd3a613d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:49:41.981726  285045 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-278896/.minikube/proxy-client-ca.key
	I0816 17:49:42.323782  285045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-278896/.minikube/proxy-client-ca.crt ...
	I0816 17:49:42.323817  285045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-278896/.minikube/proxy-client-ca.crt: {Name:mk75128e47dd037679fc405e656c58508593e4dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:49:42.324657  285045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-278896/.minikube/proxy-client-ca.key ...
	I0816 17:49:42.324689  285045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-278896/.minikube/proxy-client-ca.key: {Name:mk3f6ccdc85d2dc2c1b5d768e569cd2508b4985f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:49:42.324822  285045 certs.go:256] generating profile certs ...
	I0816 17:49:42.324901  285045 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.key
	I0816 17:49:42.325000  285045 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt with IP's: []
	I0816 17:49:42.977182  285045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt ...
	I0816 17:49:42.977216  285045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: {Name:mk1bda83cfdab2272cfc81be06128d85dee1c240 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:49:42.978073  285045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.key ...
	I0816 17:49:42.978099  285045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.key: {Name:mk987eaaed9a9a3f7e4e5bdd35ab8ad4be3481b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:49:42.978551  285045 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/apiserver.key.302e7c68
	I0816 17:49:42.978575  285045 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/apiserver.crt.302e7c68 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0816 17:49:43.170392  285045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/apiserver.crt.302e7c68 ...
	I0816 17:49:43.170422  285045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/apiserver.crt.302e7c68: {Name:mk3c2e80871adf23f9ae2045573df0a3468378e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:49:43.170608  285045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/apiserver.key.302e7c68 ...
	I0816 17:49:43.170625  285045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/apiserver.key.302e7c68: {Name:mk55b7900f4990ee92b1785096a1a3a93adad724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:49:43.170717  285045 certs.go:381] copying /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/apiserver.crt.302e7c68 -> /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/apiserver.crt
	I0816 17:49:43.170796  285045 certs.go:385] copying /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/apiserver.key.302e7c68 -> /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/apiserver.key
	I0816 17:49:43.170854  285045 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/proxy-client.key
	I0816 17:49:43.170876  285045 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/proxy-client.crt with IP's: []
	I0816 17:49:43.452907  285045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/proxy-client.crt ...
	I0816 17:49:43.452943  285045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/proxy-client.crt: {Name:mkfce6ad7090274346daaec3a6dbf34f56a24604 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:49:43.453130  285045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/proxy-client.key ...
	I0816 17:49:43.453144  285045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/proxy-client.key: {Name:mka0fbb455b5925750b76784800dbe67f2c8762f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:49:43.453346  285045 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-278896/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 17:49:43.453389  285045 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-278896/.minikube/certs/ca.pem (1082 bytes)
	I0816 17:49:43.453424  285045 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-278896/.minikube/certs/cert.pem (1123 bytes)
	I0816 17:49:43.453454  285045 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-278896/.minikube/certs/key.pem (1679 bytes)
	I0816 17:49:43.454131  285045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-278896/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 17:49:43.479895  285045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-278896/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 17:49:43.503746  285045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-278896/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 17:49:43.527799  285045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-278896/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 17:49:43.551976  285045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0816 17:49:43.576536  285045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 17:49:43.601177  285045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 17:49:43.625723  285045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 17:49:43.649499  285045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-278896/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 17:49:43.674001  285045 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 17:49:43.692435  285045 ssh_runner.go:195] Run: openssl version
	I0816 17:49:43.697904  285045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 17:49:43.707617  285045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:49:43.711110  285045 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 17:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:49:43.711182  285045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:49:43.718133  285045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 17:49:43.727616  285045 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 17:49:43.731067  285045 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 17:49:43.731121  285045 kubeadm.go:392] StartCluster: {Name:addons-035693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-035693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:49:43.731202  285045 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 17:49:43.731266  285045 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 17:49:43.770083  285045 cri.go:89] found id: ""
	I0816 17:49:43.770178  285045 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 17:49:43.779109  285045 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 17:49:43.788134  285045 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0816 17:49:43.788223  285045 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 17:49:43.797002  285045 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 17:49:43.797022  285045 kubeadm.go:157] found existing configuration files:
	
	I0816 17:49:43.797076  285045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 17:49:43.805922  285045 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 17:49:43.805991  285045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 17:49:43.814545  285045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 17:49:43.823138  285045 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 17:49:43.823203  285045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 17:49:43.831805  285045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 17:49:43.840896  285045 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 17:49:43.840990  285045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 17:49:43.849586  285045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 17:49:43.858947  285045 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 17:49:43.859043  285045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 17:49:43.867419  285045 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0816 17:49:43.916259  285045 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 17:49:43.916494  285045 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 17:49:43.961968  285045 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0816 17:49:43.962047  285045 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-aws
	I0816 17:49:43.962099  285045 kubeadm.go:310] OS: Linux
	I0816 17:49:43.962149  285045 kubeadm.go:310] CGROUPS_CPU: enabled
	I0816 17:49:43.962203  285045 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0816 17:49:43.962255  285045 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0816 17:49:43.962322  285045 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0816 17:49:43.962373  285045 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0816 17:49:43.962423  285045 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0816 17:49:43.962470  285045 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0816 17:49:43.962518  285045 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0816 17:49:43.962567  285045 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0816 17:49:44.031861  285045 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 17:49:44.031985  285045 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 17:49:44.032111  285045 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 17:49:44.039195  285045 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 17:49:44.043016  285045 out.go:235]   - Generating certificates and keys ...
	I0816 17:49:44.043239  285045 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 17:49:44.043346  285045 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 17:49:44.286736  285045 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 17:49:44.511056  285045 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 17:49:44.772081  285045 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 17:49:46.085635  285045 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 17:49:46.523458  285045 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 17:49:46.523713  285045 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-035693 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0816 17:49:47.056190  285045 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 17:49:47.056475  285045 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-035693 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0816 17:49:47.590822  285045 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 17:49:47.752715  285045 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 17:49:48.014187  285045 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 17:49:48.014264  285045 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 17:49:48.437238  285045 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 17:49:48.779088  285045 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 17:49:49.212979  285045 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 17:49:49.391114  285045 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 17:49:50.256074  285045 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 17:49:50.257834  285045 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 17:49:50.259990  285045 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 17:49:50.262242  285045 out.go:235]   - Booting up control plane ...
	I0816 17:49:50.262348  285045 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 17:49:50.262426  285045 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 17:49:50.263303  285045 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 17:49:50.273483  285045 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 17:49:50.279178  285045 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 17:49:50.279510  285045 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 17:49:50.372209  285045 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 17:49:50.372332  285045 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 17:49:51.373800  285045 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001684122s
	I0816 17:49:51.373898  285045 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 17:49:57.877257  285045 kubeadm.go:310] [api-check] The API server is healthy after 6.501311028s
	I0816 17:49:57.895654  285045 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 17:49:57.912930  285045 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 17:49:57.938925  285045 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 17:49:57.939117  285045 kubeadm.go:310] [mark-control-plane] Marking the node addons-035693 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 17:49:57.950829  285045 kubeadm.go:310] [bootstrap-token] Using token: dfzgpf.u099ubqf8oq9r2ar
	I0816 17:49:57.952770  285045 out.go:235]   - Configuring RBAC rules ...
	I0816 17:49:57.952894  285045 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 17:49:57.958001  285045 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 17:49:57.966571  285045 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 17:49:57.971924  285045 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 17:49:57.975967  285045 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 17:49:57.979880  285045 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 17:49:58.281497  285045 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 17:49:58.720885  285045 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 17:49:59.281869  285045 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 17:49:59.282985  285045 kubeadm.go:310] 
	I0816 17:49:59.283070  285045 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 17:49:59.283083  285045 kubeadm.go:310] 
	I0816 17:49:59.283159  285045 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 17:49:59.283168  285045 kubeadm.go:310] 
	I0816 17:49:59.283193  285045 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 17:49:59.283253  285045 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 17:49:59.283305  285045 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 17:49:59.283314  285045 kubeadm.go:310] 
	I0816 17:49:59.283366  285045 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 17:49:59.283374  285045 kubeadm.go:310] 
	I0816 17:49:59.283420  285045 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 17:49:59.283429  285045 kubeadm.go:310] 
	I0816 17:49:59.283480  285045 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 17:49:59.283555  285045 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 17:49:59.283626  285045 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 17:49:59.283634  285045 kubeadm.go:310] 
	I0816 17:49:59.283716  285045 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 17:49:59.283800  285045 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 17:49:59.283809  285045 kubeadm.go:310] 
	I0816 17:49:59.283890  285045 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dfzgpf.u099ubqf8oq9r2ar \
	I0816 17:49:59.283992  285045 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:522d2e9084bbcb6112ba1fb935ecdfcda75cfb6d9f17126bcf73feb6609fe7d4 \
	I0816 17:49:59.284016  285045 kubeadm.go:310] 	--control-plane 
	I0816 17:49:59.284023  285045 kubeadm.go:310] 
	I0816 17:49:59.284105  285045 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 17:49:59.284113  285045 kubeadm.go:310] 
	I0816 17:49:59.284192  285045 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dfzgpf.u099ubqf8oq9r2ar \
	I0816 17:49:59.284294  285045 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:522d2e9084bbcb6112ba1fb935ecdfcda75cfb6d9f17126bcf73feb6609fe7d4 
	I0816 17:49:59.288651  285045 kubeadm.go:310] W0816 17:49:43.912416    1174 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 17:49:59.288952  285045 kubeadm.go:310] W0816 17:49:43.913746    1174 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 17:49:59.289162  285045 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-aws\n", err: exit status 1
	I0816 17:49:59.289269  285045 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 17:49:59.289291  285045 cni.go:84] Creating CNI manager for ""
	I0816 17:49:59.289300  285045 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0816 17:49:59.291494  285045 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 17:49:59.293474  285045 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0816 17:49:59.297743  285045 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0816 17:49:59.297766  285045 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0816 17:49:59.316991  285045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 17:49:59.621246  285045 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 17:49:59.621413  285045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:49:59.621419  285045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-035693 minikube.k8s.io/updated_at=2024_08_16T17_49_59_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=addons-035693 minikube.k8s.io/primary=true
	I0816 17:49:59.799087  285045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:49:59.799135  285045 ops.go:34] apiserver oom_adj: -16
	I0816 17:50:00.326410  285045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:50:00.800126  285045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:50:01.299698  285045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:50:01.799218  285045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:50:02.299219  285045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:50:02.799694  285045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:50:03.299981  285045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:50:03.799549  285045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:50:03.885997  285045 kubeadm.go:1113] duration metric: took 4.264647393s to wait for elevateKubeSystemPrivileges
	I0816 17:50:03.886042  285045 kubeadm.go:394] duration metric: took 20.154925999s to StartCluster
	I0816 17:50:03.886061  285045 settings.go:142] acquiring lock: {Name:mk45720424438a5d93f082d2cc69f502b3ed6f5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:50:03.886975  285045 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-278896/kubeconfig
	I0816 17:50:03.887455  285045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-278896/kubeconfig: {Name:mk0b74dabbab2b27fb455b2cd76965b27d9abfcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:50:03.888054  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 17:50:03.888094  285045 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 17:50:03.888478  285045 config.go:182] Loaded profile config "addons-035693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:50:03.888500  285045 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0816 17:50:03.888625  285045 addons.go:69] Setting ingress-dns=true in profile "addons-035693"
	I0816 17:50:03.888624  285045 addons.go:69] Setting yakd=true in profile "addons-035693"
	I0816 17:50:03.888673  285045 addons.go:234] Setting addon yakd=true in "addons-035693"
	I0816 17:50:03.888703  285045 addons.go:234] Setting addon ingress-dns=true in "addons-035693"
	I0816 17:50:03.888720  285045 addons.go:69] Setting metrics-server=true in profile "addons-035693"
	I0816 17:50:03.888738  285045 addons.go:234] Setting addon metrics-server=true in "addons-035693"
	I0816 17:50:03.888755  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:03.888807  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:03.889277  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.889286  285045 addons.go:69] Setting cloud-spanner=true in profile "addons-035693"
	I0816 17:50:03.889350  285045 addons.go:234] Setting addon cloud-spanner=true in "addons-035693"
	I0816 17:50:03.889399  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:03.889790  285045 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-035693"
	I0816 17:50:03.889820  285045 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-035693"
	I0816 17:50:03.889853  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.889870  285045 addons.go:69] Setting storage-provisioner=true in profile "addons-035693"
	I0816 17:50:03.889909  285045 addons.go:234] Setting addon storage-provisioner=true in "addons-035693"
	I0816 17:50:03.889929  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:03.890355  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.892642  285045 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-035693"
	I0816 17:50:03.892693  285045 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-035693"
	I0816 17:50:03.893020  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.893198  285045 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-035693"
	I0816 17:50:03.893280  285045 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-035693"
	I0816 17:50:03.893330  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:03.893733  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.889277  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.889854  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:03.909292  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.893846  285045 addons.go:69] Setting default-storageclass=true in profile "addons-035693"
	I0816 17:50:03.888712  285045 addons.go:69] Setting inspektor-gadget=true in profile "addons-035693"
	I0816 17:50:03.920735  285045 addons.go:234] Setting addon inspektor-gadget=true in "addons-035693"
	I0816 17:50:03.888705  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:03.889862  285045 addons.go:69] Setting registry=true in profile "addons-035693"
	I0816 17:50:03.920876  285045 addons.go:234] Setting addon registry=true in "addons-035693"
	I0816 17:50:03.893858  285045 addons.go:69] Setting gcp-auth=true in profile "addons-035693"
	I0816 17:50:03.920961  285045 mustload.go:65] Loading cluster: addons-035693
	I0816 17:50:03.893870  285045 addons.go:69] Setting ingress=true in profile "addons-035693"
	I0816 17:50:03.921029  285045 addons.go:234] Setting addon ingress=true in "addons-035693"
	I0816 17:50:03.921064  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:03.941079  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.893899  285045 out.go:177] * Verifying Kubernetes components...
	I0816 17:50:03.894736  285045 addons.go:69] Setting volcano=true in profile "addons-035693"
	I0816 17:50:03.894748  285045 addons.go:69] Setting volumesnapshots=true in profile "addons-035693"
	I0816 17:50:03.931122  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:03.933300  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.933323  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:03.933470  285045 config.go:182] Loaded profile config "addons-035693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:50:03.931064  285045 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-035693"
	I0816 17:50:03.954555  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.968625  285045 addons.go:234] Setting addon volcano=true in "addons-035693"
	I0816 17:50:03.968707  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:03.969273  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.972353  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.976497  285045 addons.go:234] Setting addon volumesnapshots=true in "addons-035693"
	I0816 17:50:03.976643  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:03.977195  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.981479  285045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:50:03.999113  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:04.047576  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:04.049898  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 17:50:04.052854  285045 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 17:50:04.058698  285045 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0816 17:50:04.060912  285045 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0816 17:50:04.063690  285045 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-035693"
	I0816 17:50:04.063780  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:04.064289  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:04.064838  285045 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0816 17:50:04.065251  285045 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 17:50:04.065273  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 17:50:04.065331  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:04.076322  285045 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0816 17:50:04.076343  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0816 17:50:04.076414  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:04.077478  285045 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0816 17:50:04.084755  285045 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0816 17:50:04.088433  285045 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 17:50:04.088462  285045 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 17:50:04.088550  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:04.126975  285045 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0816 17:50:04.127001  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0816 17:50:04.127070  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	W0816 17:50:04.136823  285045 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0816 17:50:04.149070  285045 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0816 17:50:04.152467  285045 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0816 17:50:04.154738  285045 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0816 17:50:04.157272  285045 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0816 17:50:04.161505  285045 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0816 17:50:04.163393  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:04.167335  285045 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0816 17:50:04.169134  285045 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0816 17:50:04.171255  285045 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0816 17:50:04.171297  285045 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0816 17:50:04.171460  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:04.171732  285045 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0816 17:50:04.172505  285045 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0816 17:50:04.172521  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0816 17:50:04.172741  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:04.200214  285045 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0816 17:50:04.202545  285045 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0816 17:50:04.204810  285045 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0816 17:50:04.204880  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0816 17:50:04.204983  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:04.209403  285045 addons.go:234] Setting addon default-storageclass=true in "addons-035693"
	I0816 17:50:04.209495  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:04.210006  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:04.237112  285045 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0816 17:50:04.238975  285045 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0816 17:50:04.239002  285045 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0816 17:50:04.239080  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:04.280222  285045 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0816 17:50:04.287414  285045 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0816 17:50:04.296460  285045 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0816 17:50:04.296558  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:04.332716  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.333441  285045 out.go:177]   - Using image docker.io/busybox:stable
	I0816 17:50:04.339078  285045 out.go:177]   - Using image docker.io/registry:2.8.3
	I0816 17:50:04.344994  285045 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0816 17:50:04.345033  285045 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0816 17:50:04.345015  285045 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0816 17:50:04.351493  285045 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0816 17:50:04.351516  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0816 17:50:04.351582  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:04.352751  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.353468  285045 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0816 17:50:04.353484  285045 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0816 17:50:04.353544  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:04.353955  285045 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 17:50:04.354243  285045 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0816 17:50:04.354256  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0816 17:50:04.354307  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:04.391898  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.397502  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.426126  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.441946  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.447101  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.467243  285045 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 17:50:04.467266  285045 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 17:50:04.467328  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:04.490239  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.491055  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.491762  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.519442  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.525684  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.530453  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.753878  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 17:50:04.759952  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0816 17:50:04.795375  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0816 17:50:04.840109  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0816 17:50:04.893011  285045 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0816 17:50:04.893075  285045 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0816 17:50:04.899706  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0816 17:50:04.944279  285045 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0816 17:50:04.944357  285045 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0816 17:50:04.948291  285045 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0816 17:50:04.948314  285045 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0816 17:50:04.959233  285045 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0816 17:50:04.959258  285045 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0816 17:50:04.962270  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 17:50:04.969528  285045 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 17:50:04.969550  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0816 17:50:05.013737  285045 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0816 17:50:05.013825  285045 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0816 17:50:05.076188  285045 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0816 17:50:05.076262  285045 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0816 17:50:05.106866  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0816 17:50:05.167922  285045 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0816 17:50:05.167997  285045 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0816 17:50:05.178505  285045 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0816 17:50:05.178586  285045 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0816 17:50:05.193959  285045 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0816 17:50:05.194024  285045 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0816 17:50:05.196107  285045 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 17:50:05.196175  285045 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 17:50:05.200533  285045 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0816 17:50:05.200659  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0816 17:50:05.291385  285045 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0816 17:50:05.291425  285045 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0816 17:50:05.315226  285045 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0816 17:50:05.315267  285045 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0816 17:50:05.381627  285045 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0816 17:50:05.381655  285045 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0816 17:50:05.384905  285045 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0816 17:50:05.384931  285045 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0816 17:50:05.420850  285045 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0816 17:50:05.420878  285045 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0816 17:50:05.434289  285045 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 17:50:05.434315  285045 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 17:50:05.435280  285045 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0816 17:50:05.435302  285045 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0816 17:50:05.448975  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0816 17:50:05.554915  285045 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0816 17:50:05.554955  285045 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0816 17:50:05.564768  285045 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0816 17:50:05.564793  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0816 17:50:05.577640  285045 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0816 17:50:05.577666  285045 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0816 17:50:05.590036  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 17:50:05.595957  285045 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.241972389s)
	I0816 17:50:05.596001  285045 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.546078882s)
	I0816 17:50:05.596015  285045 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0816 17:50:05.597777  285045 node_ready.go:35] waiting up to 6m0s for node "addons-035693" to be "Ready" ...
	I0816 17:50:05.600396  285045 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0816 17:50:05.600420  285045 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0816 17:50:05.663110  285045 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0816 17:50:05.663137  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0816 17:50:05.666380  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0816 17:50:05.679642  285045 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0816 17:50:05.679670  285045 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0816 17:50:05.724020  285045 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 17:50:05.724046  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0816 17:50:05.769778  285045 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0816 17:50:05.769810  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0816 17:50:05.784980  285045 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0816 17:50:05.785008  285045 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0816 17:50:05.868389  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 17:50:05.911331  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0816 17:50:05.924895  285045 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0816 17:50:05.924923  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0816 17:50:05.948237  285045 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0816 17:50:05.948276  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0816 17:50:05.984352  285045 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0816 17:50:05.984378  285045 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0816 17:50:06.130782  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0816 17:50:07.153208  285045 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-035693" context rescaled to 1 replicas
	I0816 17:50:07.992134  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:09.749034  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.995070193s)
	I0816 17:50:09.749149  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.989111146s)
	I0816 17:50:09.749210  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.953754182s)
	I0816 17:50:10.175788  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:10.853353  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.953570704s)
	I0816 17:50:10.853401  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.891110182s)
	I0816 17:50:10.853440  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.746506365s)
	I0816 17:50:10.853487  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.404482969s)
	I0816 17:50:10.853949  285045 addons.go:475] Verifying addon registry=true in "addons-035693"
	I0816 17:50:10.853537  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.263470737s)
	I0816 17:50:10.854131  285045 addons.go:475] Verifying addon metrics-server=true in "addons-035693"
	I0816 17:50:10.853571  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.187164322s)
	I0816 17:50:10.854512  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.014327792s)
	I0816 17:50:10.854539  285045 addons.go:475] Verifying addon ingress=true in "addons-035693"
	I0816 17:50:10.856194  285045 out.go:177] * Verifying ingress addon...
	I0816 17:50:10.856292  285045 out.go:177] * Verifying registry addon...
	I0816 17:50:10.856339  285045 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-035693 service yakd-dashboard -n yakd-dashboard
	
	I0816 17:50:10.859728  285045 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0816 17:50:10.860653  285045 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0816 17:50:10.867769  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.999339134s)
	W0816 17:50:10.867803  285045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0816 17:50:10.867834  285045 retry.go:31] will retry after 188.586243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0816 17:50:10.867903  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.956538166s)
	W0816 17:50:10.884371  285045 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0816 17:50:10.887939  285045 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0816 17:50:10.888017  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:10.888654  285045 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0816 17:50:10.888707  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:11.056668  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 17:50:11.342682  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.211847634s)
	I0816 17:50:11.342768  285045 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-035693"
	I0816 17:50:11.345550  285045 out.go:177] * Verifying csi-hostpath-driver addon...
	I0816 17:50:11.348584  285045 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0816 17:50:11.435847  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:11.445367  285045 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0816 17:50:11.445444  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:11.448505  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:11.870817  285045 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0816 17:50:11.870894  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:11.885931  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:11.888013  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:11.917120  285045 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0816 17:50:11.917268  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:11.941936  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:12.131665  285045 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0816 17:50:12.191567  285045 addons.go:234] Setting addon gcp-auth=true in "addons-035693"
	I0816 17:50:12.191667  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:12.192217  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:12.218677  285045 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0816 17:50:12.218728  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:12.242594  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:12.352554  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:12.367301  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:12.368542  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:12.601639  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:12.852752  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:12.864026  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:12.864895  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:13.352495  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:13.366464  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:13.366758  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:13.852663  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:13.863624  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:13.865233  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:14.346922  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.290161191s)
	I0816 17:50:14.346994  285045 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.128298929s)
	I0816 17:50:14.349289  285045 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0816 17:50:14.350918  285045 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0816 17:50:14.352910  285045 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0816 17:50:14.352933  285045 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0816 17:50:14.356273  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:14.366861  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:14.368130  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:14.387707  285045 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0816 17:50:14.387799  285045 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0816 17:50:14.413340  285045 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0816 17:50:14.413414  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0816 17:50:14.435641  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0816 17:50:14.601717  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:14.853376  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:14.865634  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:14.869932  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:15.135140  285045 addons.go:475] Verifying addon gcp-auth=true in "addons-035693"
	I0816 17:50:15.139722  285045 out.go:177] * Verifying gcp-auth addon...
	I0816 17:50:15.143101  285045 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0816 17:50:15.148470  285045 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0816 17:50:15.148550  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:15.353023  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:15.366956  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:15.368104  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:15.647486  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:15.852688  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:15.864466  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:15.865718  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:16.146952  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:16.353352  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:16.365845  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:16.368164  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:16.602232  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:16.652315  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:16.852614  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:16.864555  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:16.865004  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:17.147584  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:17.353123  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:17.363878  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:17.365115  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:17.647102  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:17.852839  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:17.864116  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:17.864713  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:18.147445  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:18.353962  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:18.363939  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:18.364978  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:18.646831  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:18.852957  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:18.863965  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:18.864711  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:19.101010  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:19.148394  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:19.353427  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:19.364027  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:19.365099  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:19.646563  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:19.852220  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:19.868864  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:19.869904  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:20.147223  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:20.352384  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:20.364019  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:20.364437  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:20.647484  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:20.853178  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:20.863653  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:20.864296  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:21.147569  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:21.352834  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:21.363676  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:21.364186  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:21.600840  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:21.646510  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:21.852998  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:21.864320  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:21.865125  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:22.146699  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:22.353127  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:22.364041  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:22.365241  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:22.647476  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:22.852046  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:22.863788  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:22.864548  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:23.147123  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:23.352377  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:23.363918  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:23.364708  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:23.601198  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:23.646834  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:23.852805  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:23.863733  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:23.865151  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:24.147437  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:24.352665  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:24.364207  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:24.364834  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:24.646770  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:24.853151  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:24.864433  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:24.865195  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:25.146628  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:25.352921  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:25.364651  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:25.365065  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:25.646759  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:25.852786  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:25.863581  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:25.864661  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:26.100861  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:26.147114  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:26.352307  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:26.364602  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:26.364773  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:26.647486  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:26.852776  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:26.863873  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:26.864511  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:27.146593  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:27.353102  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:27.363761  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:27.365811  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:27.647544  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:27.852874  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:27.863545  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:27.864550  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:28.146707  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:28.351991  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:28.363719  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:28.364922  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:28.602283  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:28.646913  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:28.851892  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:28.865076  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:28.865328  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:29.148348  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:29.354066  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:29.363639  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:29.365312  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:29.647006  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:29.852538  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:29.864500  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:29.865282  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:30.149467  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:30.352761  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:30.363549  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:30.364299  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:30.646926  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:30.852150  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:30.864539  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:30.865073  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:31.101223  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:31.147117  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:31.352100  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:31.365309  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:31.365422  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:31.646607  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:31.852824  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:31.864486  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:31.865729  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:32.147178  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:32.352426  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:32.363853  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:32.364960  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:32.646513  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:32.852955  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:32.864768  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:32.865011  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:33.102251  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:33.147357  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:33.352997  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:33.369889  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:33.370322  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:33.646539  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:33.853119  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:33.870295  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:33.872076  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:34.148367  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:34.353911  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:34.364365  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:34.365682  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:34.646537  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:34.851930  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:34.863624  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:34.866458  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:35.102946  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:35.146999  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:35.352643  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:35.363233  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:35.364322  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:35.646410  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:35.852866  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:35.863946  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:35.864384  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:36.147540  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:36.352333  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:36.363496  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:36.364488  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:36.646742  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:36.852901  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:36.863597  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:36.864730  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:37.147404  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:37.353168  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:37.364472  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:37.365251  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:37.601774  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:37.647337  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:37.852377  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:37.863938  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:37.864704  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:38.146637  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:38.352283  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:38.363994  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:38.365128  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:38.647488  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:38.852647  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:38.862946  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:38.864241  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:39.147152  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:39.352126  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:39.363672  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:39.365533  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:39.647237  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:39.853085  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:39.863471  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:39.865361  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:40.101658  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:40.147453  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:40.352761  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:40.363341  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:40.364586  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:40.646829  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:40.852662  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:40.864804  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:40.865201  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:41.146635  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:41.352076  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:41.364230  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:41.364683  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:41.646284  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:41.852922  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:41.863914  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:41.865173  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:42.103239  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:42.147593  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:42.352830  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:42.364947  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:42.365877  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:42.646574  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:42.852097  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:42.863627  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:42.865514  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:43.146576  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:43.352484  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:43.363889  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:43.365628  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:43.646057  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:43.852542  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:43.864649  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:43.865488  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:44.146125  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:44.352490  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:44.363840  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:44.364917  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:44.601456  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:44.646263  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:44.852818  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:44.864056  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:44.866039  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:45.148168  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:45.353692  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:45.366681  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:45.367604  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:45.646479  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:45.852667  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:45.863580  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:45.865579  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:46.146568  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:46.352057  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:46.364347  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:46.365419  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:46.647100  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:46.853071  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:46.863652  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:46.864363  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:47.101731  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:47.146371  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:47.353031  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:47.363677  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:47.364756  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:47.648498  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:47.854674  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:47.864924  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:47.865184  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:48.147142  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:48.352943  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:48.364655  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:48.366594  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:48.647160  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:48.852711  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:48.863396  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:48.864054  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:49.146931  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:49.352015  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:49.364861  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:49.365257  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:49.601265  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:49.646901  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:49.852171  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:49.864412  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:49.866114  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:50.147500  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:50.361770  285045 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0816 17:50:50.361848  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:50.381060  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:50.381329  285045 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0816 17:50:50.381371  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:50.654928  285045 node_ready.go:49] node "addons-035693" has status "Ready":"True"
	I0816 17:50:50.655001  285045 node_ready.go:38] duration metric: took 45.057196224s for node "addons-035693" to be "Ready" ...
	I0816 17:50:50.655042  285045 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 17:50:50.668394  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:50.685870  285045 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rbz4z" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:50.854186  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:50.865103  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:50.865403  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:51.162401  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:51.360715  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:51.367185  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:51.367618  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:51.646949  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:51.855100  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:51.863976  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:51.864665  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:52.147813  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:52.398801  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:52.399871  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:52.411686  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:52.647636  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:52.694885  285045 pod_ready.go:103] pod "coredns-6f6b679f8f-rbz4z" in "kube-system" namespace has status "Ready":"False"
	I0816 17:50:52.874250  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:52.880709  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:52.882252  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:53.147593  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:53.381303  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:53.466159  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:53.468134  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:53.646648  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:53.696610  285045 pod_ready.go:93] pod "coredns-6f6b679f8f-rbz4z" in "kube-system" namespace has status "Ready":"True"
	I0816 17:50:53.696685  285045 pod_ready.go:82] duration metric: took 3.010740776s for pod "coredns-6f6b679f8f-rbz4z" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:53.696722  285045 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-035693" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:53.703115  285045 pod_ready.go:93] pod "etcd-addons-035693" in "kube-system" namespace has status "Ready":"True"
	I0816 17:50:53.703181  285045 pod_ready.go:82] duration metric: took 6.428986ms for pod "etcd-addons-035693" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:53.703210  285045 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-035693" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:53.717140  285045 pod_ready.go:93] pod "kube-apiserver-addons-035693" in "kube-system" namespace has status "Ready":"True"
	I0816 17:50:53.717215  285045 pod_ready.go:82] duration metric: took 13.984627ms for pod "kube-apiserver-addons-035693" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:53.717241  285045 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-035693" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:53.729656  285045 pod_ready.go:93] pod "kube-controller-manager-addons-035693" in "kube-system" namespace has status "Ready":"True"
	I0816 17:50:53.729723  285045 pod_ready.go:82] duration metric: took 12.461066ms for pod "kube-controller-manager-addons-035693" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:53.729751  285045 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gk9xc" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:53.740221  285045 pod_ready.go:93] pod "kube-proxy-gk9xc" in "kube-system" namespace has status "Ready":"True"
	I0816 17:50:53.740295  285045 pod_ready.go:82] duration metric: took 10.524667ms for pod "kube-proxy-gk9xc" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:53.740321  285045 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-035693" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:53.854423  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:53.867652  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:53.869247  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:54.092453  285045 pod_ready.go:93] pod "kube-scheduler-addons-035693" in "kube-system" namespace has status "Ready":"True"
	I0816 17:50:54.092534  285045 pod_ready.go:82] duration metric: took 352.19134ms for pod "kube-scheduler-addons-035693" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:54.092584  285045 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:54.152350  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:54.354432  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:54.367461  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:54.373575  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:54.647430  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:54.855566  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:54.867874  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:54.869224  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:55.148416  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:55.354014  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:55.376324  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:55.377729  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:55.647150  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:55.864347  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:55.869854  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:55.872677  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:56.100007  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:50:56.148850  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:56.354281  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:56.364669  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:56.366005  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:56.647436  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:56.853437  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:56.864132  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:56.865807  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:57.147105  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:57.353300  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:57.371437  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:57.372589  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:57.647311  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:57.854955  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:57.869292  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:57.871730  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:58.116422  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:50:58.147738  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:58.353610  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:58.368324  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:58.371470  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:58.647816  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:58.854110  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:58.865568  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:58.867414  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:59.147748  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:59.355375  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:59.365193  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:59.371358  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:59.648182  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:59.879769  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:59.890345  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:59.891917  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:00.121083  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:00.151927  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:00.359406  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:00.367672  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:51:00.371519  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:00.647682  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:00.856113  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:00.870764  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:51:00.872065  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:01.148808  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:01.361976  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:01.381777  285045 kapi.go:107] duration metric: took 50.52111974s to wait for kubernetes.io/minikube-addons=registry ...
	I0816 17:51:01.382947  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:01.656093  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:01.853183  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:01.863809  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:02.147012  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:02.354520  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:02.365529  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:02.602207  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:02.647477  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:02.862035  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:02.880385  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:03.147589  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:03.354102  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:03.364177  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:03.647858  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:03.862837  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:03.867631  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:04.147267  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:04.362564  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:04.365068  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:04.647733  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:04.853960  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:04.864844  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:05.100269  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:05.147109  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:05.357997  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:05.364319  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:05.647443  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:05.853891  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:05.864263  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:06.147695  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:06.360106  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:06.366377  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:06.647089  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:06.854089  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:06.864280  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:07.148091  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:07.362526  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:07.383316  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:07.599540  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:07.648080  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:07.854168  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:07.866208  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:08.147834  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:08.354472  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:08.364431  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:08.674021  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:08.855090  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:08.864753  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:09.146373  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:09.358309  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:09.366376  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:09.647286  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:09.854689  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:09.863793  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:10.100505  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:10.147191  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:10.354491  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:10.371873  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:10.647548  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:10.853676  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:10.864329  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:11.148143  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:11.355051  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:11.383508  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:11.647123  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:11.855727  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:11.864799  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:12.101201  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:12.147483  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:12.354516  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:12.364256  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:12.647634  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:12.853866  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:12.865522  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:13.147085  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:13.357241  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:13.370668  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:13.648501  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:13.853972  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:13.865543  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:14.105406  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:14.149218  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:14.354265  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:14.363944  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:14.647914  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:14.853703  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:14.864264  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:15.147559  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:15.355786  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:15.364385  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:15.648220  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:15.854651  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:15.865556  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:16.147565  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:16.354505  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:16.365487  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:16.600425  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:16.647157  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:16.855260  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:16.866078  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:17.147976  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:17.354504  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:17.365453  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:17.649461  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:17.860811  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:17.867091  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:18.147113  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:18.353683  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:18.364744  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:18.647586  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:18.854338  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:18.864978  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:19.100434  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:19.147876  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:19.353834  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:19.364548  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:19.647377  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:19.854611  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:19.865182  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:20.147838  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:20.354058  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:20.381672  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:20.647689  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:20.854266  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:20.866839  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:21.105357  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:21.147832  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:21.354478  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:21.372022  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:21.647282  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:21.853838  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:21.869542  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:22.147038  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:22.353799  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:22.364016  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:22.646531  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:22.855142  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:22.865256  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:23.147244  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:23.354654  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:23.366452  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:23.601852  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:23.648171  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:23.855905  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:23.954707  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:24.147135  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:24.353204  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:24.364951  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:24.647645  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:24.853845  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:24.864558  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:25.148205  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:25.353933  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:25.364146  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:25.646839  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:25.854192  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:25.864475  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:26.100324  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:26.146914  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:26.357422  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:26.364738  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:26.647825  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:26.856113  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:26.864979  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:27.148216  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:27.353495  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:27.364675  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:27.650669  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:27.854872  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:27.864273  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:28.146994  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:28.354251  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:28.364897  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:28.601779  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:28.648191  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:28.855330  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:28.864816  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:29.154052  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:29.356117  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:29.364834  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:29.647841  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:29.854699  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:29.864729  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:30.147558  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:30.353855  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:30.366512  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:30.653687  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:30.855447  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:30.865076  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:31.100424  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:31.147468  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:31.355209  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:31.364250  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:31.649077  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:31.853515  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:31.864751  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:32.147370  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:32.354256  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:32.364185  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:32.648917  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:32.854193  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:32.864502  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:33.149376  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:33.359353  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:33.365975  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:33.600215  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:33.646999  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:33.854939  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:33.864887  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:34.148664  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:34.355152  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:34.363993  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:34.646907  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:34.854193  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:34.864635  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:35.147232  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:35.353890  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:35.364395  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:35.647363  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:35.855025  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:35.864812  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:36.101431  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:36.148967  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:36.354614  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:36.366744  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:36.647758  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:36.855666  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:36.865514  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:37.147555  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:37.355058  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:37.364605  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:37.651097  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:37.854423  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:37.864154  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:38.147225  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:38.353461  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:38.365764  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:38.598836  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:38.646863  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:38.853856  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:38.864046  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:39.150957  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:39.354006  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:39.364804  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:39.647305  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:39.870022  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:39.880372  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:40.146923  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:40.354690  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:40.364302  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:40.602195  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:40.647883  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:40.860433  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:40.886975  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:41.174060  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:41.356139  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:41.365699  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:41.661435  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:41.862306  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:41.865526  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:42.160109  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:42.357369  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:42.365075  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:42.646561  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:42.853626  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:42.864032  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:43.099898  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:43.147916  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:43.353997  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:43.364431  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:43.647808  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:43.854796  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:43.865323  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:44.146837  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:44.354643  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:44.364493  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:44.649019  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:44.853742  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:44.864619  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:45.123716  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:45.166897  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:45.410276  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:45.412011  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:45.652252  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:45.855105  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:45.864072  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:46.147818  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:46.357936  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:46.367566  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:46.646395  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:46.854799  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:46.866170  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:47.148368  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:47.354900  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:47.374701  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:47.600502  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:47.652277  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:47.861296  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:47.870863  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:48.158936  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:48.356329  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:48.367114  285045 kapi.go:107] duration metric: took 1m37.50738507s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0816 17:51:48.647142  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:48.854979  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:49.146753  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:49.354039  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:49.601386  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:49.693167  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:49.854050  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:50.148315  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:50.353268  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:50.646860  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:50.854277  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:51.149149  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:51.363022  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:51.601696  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:51.648531  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:51.853798  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:52.148017  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:52.353357  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:52.649751  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:52.855277  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:53.148870  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:53.358278  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:53.647023  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:53.854289  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:54.100268  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:54.147149  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:54.354672  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:54.648752  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:54.853943  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:55.148502  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:55.355270  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:55.648737  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:55.861222  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:56.113451  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:56.148300  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:56.358511  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:56.649027  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:56.853754  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:57.147643  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:57.354241  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:57.646812  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:57.854290  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:58.154998  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:58.354108  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:58.600942  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:58.646862  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:58.854252  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:59.147596  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:59.354095  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:59.647250  285045 kapi.go:107] duration metric: took 1m44.504161402s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0816 17:51:59.649340  285045 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-035693 cluster.
	I0816 17:51:59.651040  285045 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0816 17:51:59.652803  285045 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0816 17:51:59.853009  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:52:00.357790  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:52:00.621925  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:52:00.854564  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:52:01.353962  285045 kapi.go:107] duration metric: took 1m50.00539667s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0816 17:52:01.356271  285045 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, cloud-spanner, nvidia-device-plugin, metrics-server, yakd, inspektor-gadget, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0816 17:52:01.358985  285045 addons.go:510] duration metric: took 1m57.47047744s for enable addons: enabled=[storage-provisioner ingress-dns cloud-spanner nvidia-device-plugin metrics-server yakd inspektor-gadget default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0816 17:52:03.100224  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:52:05.599371  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:52:08.099221  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:52:10.100834  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:52:11.099041  285045 pod_ready.go:93] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"True"
	I0816 17:52:11.099074  285045 pod_ready.go:82] duration metric: took 1m17.006461408s for pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace to be "Ready" ...
	I0816 17:52:11.099089  285045 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-jsx2r" in "kube-system" namespace to be "Ready" ...
	I0816 17:52:11.104870  285045 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-jsx2r" in "kube-system" namespace has status "Ready":"True"
	I0816 17:52:11.104898  285045 pod_ready.go:82] duration metric: took 5.801913ms for pod "nvidia-device-plugin-daemonset-jsx2r" in "kube-system" namespace to be "Ready" ...
	I0816 17:52:11.104922  285045 pod_ready.go:39] duration metric: took 1m20.449849672s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 17:52:11.104938  285045 api_server.go:52] waiting for apiserver process to appear ...
	I0816 17:52:11.104974  285045 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 17:52:11.105042  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 17:52:11.163469  285045 cri.go:89] found id: "16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2"
	I0816 17:52:11.163492  285045 cri.go:89] found id: ""
	I0816 17:52:11.163502  285045 logs.go:276] 1 containers: [16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2]
	I0816 17:52:11.163596  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:11.166996  285045 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 17:52:11.167077  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 17:52:11.206891  285045 cri.go:89] found id: "561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea"
	I0816 17:52:11.206913  285045 cri.go:89] found id: ""
	I0816 17:52:11.206922  285045 logs.go:276] 1 containers: [561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea]
	I0816 17:52:11.206978  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:11.210709  285045 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 17:52:11.210781  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 17:52:11.250990  285045 cri.go:89] found id: "012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52"
	I0816 17:52:11.251064  285045 cri.go:89] found id: ""
	I0816 17:52:11.251100  285045 logs.go:276] 1 containers: [012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52]
	I0816 17:52:11.251195  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:11.256402  285045 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 17:52:11.256473  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 17:52:11.299843  285045 cri.go:89] found id: "37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db"
	I0816 17:52:11.299867  285045 cri.go:89] found id: ""
	I0816 17:52:11.299875  285045 logs.go:276] 1 containers: [37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db]
	I0816 17:52:11.299931  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:11.303593  285045 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 17:52:11.303668  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 17:52:11.347845  285045 cri.go:89] found id: "2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3"
	I0816 17:52:11.347865  285045 cri.go:89] found id: ""
	I0816 17:52:11.347873  285045 logs.go:276] 1 containers: [2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3]
	I0816 17:52:11.347928  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:11.351603  285045 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 17:52:11.351723  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 17:52:11.400968  285045 cri.go:89] found id: "8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e"
	I0816 17:52:11.401037  285045 cri.go:89] found id: ""
	I0816 17:52:11.401058  285045 logs.go:276] 1 containers: [8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e]
	I0816 17:52:11.401145  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:11.405081  285045 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 17:52:11.405200  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 17:52:11.448800  285045 cri.go:89] found id: "3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac"
	I0816 17:52:11.448869  285045 cri.go:89] found id: ""
	I0816 17:52:11.448884  285045 logs.go:276] 1 containers: [3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac]
	I0816 17:52:11.448958  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:11.452524  285045 logs.go:123] Gathering logs for coredns [012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52] ...
	I0816 17:52:11.452551  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52"
	I0816 17:52:11.514732  285045 logs.go:123] Gathering logs for kube-scheduler [37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db] ...
	I0816 17:52:11.514775  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db"
	I0816 17:52:11.566526  285045 logs.go:123] Gathering logs for CRI-O ...
	I0816 17:52:11.566562  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 17:52:11.667037  285045 logs.go:123] Gathering logs for dmesg ...
	I0816 17:52:11.667076  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 17:52:11.683945  285045 logs.go:123] Gathering logs for describe nodes ...
	I0816 17:52:11.683977  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 17:52:11.885764  285045 logs.go:123] Gathering logs for kube-apiserver [16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2] ...
	I0816 17:52:11.885796  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2"
	I0816 17:52:11.949048  285045 logs.go:123] Gathering logs for kube-controller-manager [8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e] ...
	I0816 17:52:11.949087  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e"
	I0816 17:52:12.029026  285045 logs.go:123] Gathering logs for kindnet [3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac] ...
	I0816 17:52:12.029089  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac"
	I0816 17:52:12.125610  285045 logs.go:123] Gathering logs for container status ...
	I0816 17:52:12.125700  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 17:52:12.209774  285045 logs.go:123] Gathering logs for kubelet ...
	I0816 17:52:12.209809  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0816 17:52:12.259765  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.299922    1482 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-035693' and this object
	W0816 17:52:12.260009  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.299969    1482 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:12.260189  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.300343    1482 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-035693' and this object
	W0816 17:52:12.260402  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.300373    1482 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:12.260622  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.300698    1482 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-035693' and this object
	W0816 17:52:12.260853  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.300726    1482 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:12.261036  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.301026    1482 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-035693' and this object
	W0816 17:52:12.261261  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.301072    1482 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:12.261430  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.301389    1482 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-035693" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-035693' and this object
	W0816 17:52:12.261637  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.301415    1482 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	I0816 17:52:12.300441  285045 logs.go:123] Gathering logs for etcd [561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea] ...
	I0816 17:52:12.300470  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea"
	I0816 17:52:12.351415  285045 logs.go:123] Gathering logs for kube-proxy [2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3] ...
	I0816 17:52:12.351448  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3"
	I0816 17:52:12.395004  285045 out.go:358] Setting ErrFile to fd 2...
	I0816 17:52:12.395037  285045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0816 17:52:12.395142  285045 out.go:270] X Problems detected in kubelet:
	W0816 17:52:12.395162  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.300726    1482 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:12.395175  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.301026    1482 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-035693' and this object
	W0816 17:52:12.395183  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.301072    1482 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:12.395195  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.301389    1482 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-035693" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-035693' and this object
	W0816 17:52:12.395201  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.301415    1482 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	I0816 17:52:12.395224  285045 out.go:358] Setting ErrFile to fd 2...
	I0816 17:52:12.395232  285045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:52:22.396545  285045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:52:22.412549  285045 api_server.go:72] duration metric: took 2m18.524415052s to wait for apiserver process to appear ...
	I0816 17:52:22.412593  285045 api_server.go:88] waiting for apiserver healthz status ...
	I0816 17:52:22.412632  285045 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 17:52:22.412697  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 17:52:22.461140  285045 cri.go:89] found id: "16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2"
	I0816 17:52:22.461166  285045 cri.go:89] found id: ""
	I0816 17:52:22.461175  285045 logs.go:276] 1 containers: [16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2]
	I0816 17:52:22.461234  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:22.465222  285045 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 17:52:22.465290  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 17:52:22.512115  285045 cri.go:89] found id: "561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea"
	I0816 17:52:22.512136  285045 cri.go:89] found id: ""
	I0816 17:52:22.512144  285045 logs.go:276] 1 containers: [561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea]
	I0816 17:52:22.512201  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:22.515891  285045 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 17:52:22.515970  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 17:52:22.556457  285045 cri.go:89] found id: "012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52"
	I0816 17:52:22.556478  285045 cri.go:89] found id: ""
	I0816 17:52:22.556499  285045 logs.go:276] 1 containers: [012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52]
	I0816 17:52:22.556556  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:22.560650  285045 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 17:52:22.560722  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 17:52:22.600623  285045 cri.go:89] found id: "37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db"
	I0816 17:52:22.600649  285045 cri.go:89] found id: ""
	I0816 17:52:22.600667  285045 logs.go:276] 1 containers: [37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db]
	I0816 17:52:22.600728  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:22.604611  285045 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 17:52:22.604693  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 17:52:22.652837  285045 cri.go:89] found id: "2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3"
	I0816 17:52:22.652861  285045 cri.go:89] found id: ""
	I0816 17:52:22.652872  285045 logs.go:276] 1 containers: [2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3]
	I0816 17:52:22.652947  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:22.656998  285045 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 17:52:22.657093  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 17:52:22.704427  285045 cri.go:89] found id: "8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e"
	I0816 17:52:22.704500  285045 cri.go:89] found id: ""
	I0816 17:52:22.704514  285045 logs.go:276] 1 containers: [8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e]
	I0816 17:52:22.704613  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:22.708328  285045 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 17:52:22.708395  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 17:52:22.749812  285045 cri.go:89] found id: "3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac"
	I0816 17:52:22.749836  285045 cri.go:89] found id: ""
	I0816 17:52:22.749844  285045 logs.go:276] 1 containers: [3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac]
	I0816 17:52:22.749923  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:22.753783  285045 logs.go:123] Gathering logs for describe nodes ...
	I0816 17:52:22.753814  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 17:52:22.903048  285045 logs.go:123] Gathering logs for kube-scheduler [37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db] ...
	I0816 17:52:22.903081  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db"
	I0816 17:52:22.959876  285045 logs.go:123] Gathering logs for kube-controller-manager [8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e] ...
	I0816 17:52:22.959910  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e"
	I0816 17:52:23.033378  285045 logs.go:123] Gathering logs for kindnet [3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac] ...
	I0816 17:52:23.033416  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac"
	I0816 17:52:23.079523  285045 logs.go:123] Gathering logs for kubelet ...
	I0816 17:52:23.079557  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0816 17:52:23.130020  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.299922    1482 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-035693' and this object
	W0816 17:52:23.130290  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.299969    1482 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:23.130469  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.300343    1482 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-035693' and this object
	W0816 17:52:23.130682  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.300373    1482 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:23.130870  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.300698    1482 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-035693' and this object
	W0816 17:52:23.131101  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.300726    1482 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:23.131287  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.301026    1482 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-035693' and this object
	W0816 17:52:23.131521  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.301072    1482 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:23.131740  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.301389    1482 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-035693" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-035693' and this object
	W0816 17:52:23.131956  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.301415    1482 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	I0816 17:52:23.172004  285045 logs.go:123] Gathering logs for dmesg ...
	I0816 17:52:23.172042  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 17:52:23.189120  285045 logs.go:123] Gathering logs for kube-apiserver [16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2] ...
	I0816 17:52:23.189151  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2"
	I0816 17:52:23.262456  285045 logs.go:123] Gathering logs for etcd [561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea] ...
	I0816 17:52:23.262488  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea"
	I0816 17:52:23.312676  285045 logs.go:123] Gathering logs for coredns [012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52] ...
	I0816 17:52:23.312715  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52"
	I0816 17:52:23.362275  285045 logs.go:123] Gathering logs for kube-proxy [2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3] ...
	I0816 17:52:23.362306  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3"
	I0816 17:52:23.408344  285045 logs.go:123] Gathering logs for CRI-O ...
	I0816 17:52:23.408376  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 17:52:23.510483  285045 logs.go:123] Gathering logs for container status ...
	I0816 17:52:23.510570  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 17:52:23.577126  285045 out.go:358] Setting ErrFile to fd 2...
	I0816 17:52:23.577157  285045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0816 17:52:23.577241  285045 out.go:270] X Problems detected in kubelet:
	W0816 17:52:23.577255  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.300726    1482 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:23.577285  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.301026    1482 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-035693' and this object
	W0816 17:52:23.577294  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.301072    1482 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:23.577301  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.301389    1482 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-035693" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-035693' and this object
	W0816 17:52:23.577308  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.301415    1482 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	I0816 17:52:23.577320  285045 out.go:358] Setting ErrFile to fd 2...
	I0816 17:52:23.577327  285045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:52:33.578632  285045 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0816 17:52:33.586611  285045 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0816 17:52:33.587709  285045 api_server.go:141] control plane version: v1.31.0
	I0816 17:52:33.587737  285045 api_server.go:131] duration metric: took 11.175137198s to wait for apiserver health ...
	I0816 17:52:33.587746  285045 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 17:52:33.587768  285045 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 17:52:33.587825  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 17:52:33.624854  285045 cri.go:89] found id: "16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2"
	I0816 17:52:33.624879  285045 cri.go:89] found id: ""
	I0816 17:52:33.624896  285045 logs.go:276] 1 containers: [16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2]
	I0816 17:52:33.624955  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:33.628433  285045 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 17:52:33.628500  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 17:52:33.668071  285045 cri.go:89] found id: "561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea"
	I0816 17:52:33.668091  285045 cri.go:89] found id: ""
	I0816 17:52:33.668100  285045 logs.go:276] 1 containers: [561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea]
	I0816 17:52:33.668156  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:33.671765  285045 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 17:52:33.671833  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 17:52:33.714887  285045 cri.go:89] found id: "012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52"
	I0816 17:52:33.714906  285045 cri.go:89] found id: ""
	I0816 17:52:33.714915  285045 logs.go:276] 1 containers: [012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52]
	I0816 17:52:33.714973  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:33.718624  285045 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 17:52:33.718692  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 17:52:33.759957  285045 cri.go:89] found id: "37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db"
	I0816 17:52:33.759982  285045 cri.go:89] found id: ""
	I0816 17:52:33.759991  285045 logs.go:276] 1 containers: [37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db]
	I0816 17:52:33.760047  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:33.763595  285045 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 17:52:33.763665  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 17:52:33.804084  285045 cri.go:89] found id: "2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3"
	I0816 17:52:33.804111  285045 cri.go:89] found id: ""
	I0816 17:52:33.804119  285045 logs.go:276] 1 containers: [2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3]
	I0816 17:52:33.804177  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:33.808159  285045 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 17:52:33.808233  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 17:52:33.847110  285045 cri.go:89] found id: "8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e"
	I0816 17:52:33.847132  285045 cri.go:89] found id: ""
	I0816 17:52:33.847140  285045 logs.go:276] 1 containers: [8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e]
	I0816 17:52:33.847232  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:33.851158  285045 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 17:52:33.851280  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 17:52:33.890592  285045 cri.go:89] found id: "3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac"
	I0816 17:52:33.890622  285045 cri.go:89] found id: ""
	I0816 17:52:33.890630  285045 logs.go:276] 1 containers: [3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac]
	I0816 17:52:33.890692  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:33.894418  285045 logs.go:123] Gathering logs for kubelet ...
	I0816 17:52:33.894453  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0816 17:52:33.940428  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.299922    1482 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-035693' and this object
	W0816 17:52:33.940703  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.299969    1482 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:33.940881  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.300343    1482 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-035693' and this object
	W0816 17:52:33.941099  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.300373    1482 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:33.941287  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.300698    1482 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-035693' and this object
	W0816 17:52:33.941514  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.300726    1482 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:33.941702  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.301026    1482 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-035693' and this object
	W0816 17:52:33.941929  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.301072    1482 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:33.942131  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.301389    1482 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-035693" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-035693' and this object
	W0816 17:52:33.942353  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.301415    1482 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	I0816 17:52:33.984453  285045 logs.go:123] Gathering logs for dmesg ...
	I0816 17:52:33.984488  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 17:52:34.000832  285045 logs.go:123] Gathering logs for describe nodes ...
	I0816 17:52:34.000862  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 17:52:34.147169  285045 logs.go:123] Gathering logs for kube-apiserver [16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2] ...
	I0816 17:52:34.147198  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2"
	I0816 17:52:34.200849  285045 logs.go:123] Gathering logs for coredns [012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52] ...
	I0816 17:52:34.200884  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52"
	I0816 17:52:34.250310  285045 logs.go:123] Gathering logs for kube-controller-manager [8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e] ...
	I0816 17:52:34.250341  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e"
	I0816 17:52:34.321425  285045 logs.go:123] Gathering logs for etcd [561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea] ...
	I0816 17:52:34.321464  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea"
	I0816 17:52:34.370297  285045 logs.go:123] Gathering logs for kube-scheduler [37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db] ...
	I0816 17:52:34.370371  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db"
	I0816 17:52:34.429642  285045 logs.go:123] Gathering logs for kube-proxy [2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3] ...
	I0816 17:52:34.429675  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3"
	I0816 17:52:34.470183  285045 logs.go:123] Gathering logs for kindnet [3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac] ...
	I0816 17:52:34.470214  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac"
	I0816 17:52:34.517728  285045 logs.go:123] Gathering logs for CRI-O ...
	I0816 17:52:34.517767  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 17:52:34.613499  285045 logs.go:123] Gathering logs for container status ...
	I0816 17:52:34.613537  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 17:52:34.664287  285045 out.go:358] Setting ErrFile to fd 2...
	I0816 17:52:34.664316  285045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0816 17:52:34.664462  285045 out.go:270] X Problems detected in kubelet:
	W0816 17:52:34.664480  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.300726    1482 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:34.664626  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.301026    1482 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-035693' and this object
	W0816 17:52:34.664637  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.301072    1482 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:34.664649  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.301389    1482 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-035693" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-035693' and this object
	W0816 17:52:34.664657  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.301415    1482 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	I0816 17:52:34.664663  285045 out.go:358] Setting ErrFile to fd 2...
	I0816 17:52:34.664670  285045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:52:44.677793  285045 system_pods.go:59] 18 kube-system pods found
	I0816 17:52:44.677834  285045 system_pods.go:61] "coredns-6f6b679f8f-rbz4z" [92644452-aa36-4753-a264-a18cccb9492c] Running
	I0816 17:52:44.677842  285045 system_pods.go:61] "csi-hostpath-attacher-0" [664885ca-b697-49ae-880d-8ecc99fc3626] Running
	I0816 17:52:44.677849  285045 system_pods.go:61] "csi-hostpath-resizer-0" [2f444058-2a11-47d1-9aa5-5819a61fd0eb] Running
	I0816 17:52:44.677854  285045 system_pods.go:61] "csi-hostpathplugin-zhj5f" [f51366cb-7c4d-495a-82b3-e94e9f6e557a] Running
	I0816 17:52:44.677859  285045 system_pods.go:61] "etcd-addons-035693" [921b06ac-bc6a-42f2-b195-bf0df0c41429] Running
	I0816 17:52:44.677863  285045 system_pods.go:61] "kindnet-ss96t" [a57b0d98-03aa-45a1-a52d-fa5c7752f339] Running
	I0816 17:52:44.677868  285045 system_pods.go:61] "kube-apiserver-addons-035693" [46468507-7fa1-48a0-86a7-bb8c24da898a] Running
	I0816 17:52:44.677898  285045 system_pods.go:61] "kube-controller-manager-addons-035693" [437a7398-1257-4c14-9bdb-dc231abacfc3] Running
	I0816 17:52:44.677903  285045 system_pods.go:61] "kube-ingress-dns-minikube" [678e56e7-144e-4853-bb85-8157ca9cdd5d] Running
	I0816 17:52:44.677908  285045 system_pods.go:61] "kube-proxy-gk9xc" [fdb8dfd7-8793-4882-9b5f-d512e5caff6f] Running
	I0816 17:52:44.677912  285045 system_pods.go:61] "kube-scheduler-addons-035693" [9996406e-7cbf-43cf-8100-4a4e1fed2cb7] Running
	I0816 17:52:44.677916  285045 system_pods.go:61] "metrics-server-8988944d9-ssk4x" [0bdf104e-0061-4330-aaa3-3ed64ee249e7] Running
	I0816 17:52:44.677920  285045 system_pods.go:61] "nvidia-device-plugin-daemonset-jsx2r" [c4f0b8cd-7cfb-4b35-b194-ec9b1febfd6b] Running
	I0816 17:52:44.677924  285045 system_pods.go:61] "registry-6fb4cdfc84-tm8w6" [7a1098d6-9eed-44ed-b050-d7eb7f621f53] Running
	I0816 17:52:44.677927  285045 system_pods.go:61] "registry-proxy-t2nrw" [b85d1b9d-5cbc-4b35-a578-9eb458257f07] Running
	I0816 17:52:44.677931  285045 system_pods.go:61] "snapshot-controller-56fcc65765-cnwmk" [1d49b80b-be25-4e9d-ba9b-44170fa68be0] Running
	I0816 17:52:44.677935  285045 system_pods.go:61] "snapshot-controller-56fcc65765-gr2ps" [1efa743e-f16d-4942-8095-a87be8bd0e66] Running
	I0816 17:52:44.677939  285045 system_pods.go:61] "storage-provisioner" [7ce79565-40dd-4899-9f49-003e0e94fdd9] Running
	I0816 17:52:44.677945  285045 system_pods.go:74] duration metric: took 11.090193132s to wait for pod list to return data ...
	I0816 17:52:44.677952  285045 default_sa.go:34] waiting for default service account to be created ...
	I0816 17:52:44.681217  285045 default_sa.go:45] found service account: "default"
	I0816 17:52:44.681245  285045 default_sa.go:55] duration metric: took 3.28594ms for default service account to be created ...
	I0816 17:52:44.681255  285045 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 17:52:44.691915  285045 system_pods.go:86] 18 kube-system pods found
	I0816 17:52:44.692014  285045 system_pods.go:89] "coredns-6f6b679f8f-rbz4z" [92644452-aa36-4753-a264-a18cccb9492c] Running
	I0816 17:52:44.692038  285045 system_pods.go:89] "csi-hostpath-attacher-0" [664885ca-b697-49ae-880d-8ecc99fc3626] Running
	I0816 17:52:44.692082  285045 system_pods.go:89] "csi-hostpath-resizer-0" [2f444058-2a11-47d1-9aa5-5819a61fd0eb] Running
	I0816 17:52:44.692107  285045 system_pods.go:89] "csi-hostpathplugin-zhj5f" [f51366cb-7c4d-495a-82b3-e94e9f6e557a] Running
	I0816 17:52:44.692127  285045 system_pods.go:89] "etcd-addons-035693" [921b06ac-bc6a-42f2-b195-bf0df0c41429] Running
	I0816 17:52:44.692161  285045 system_pods.go:89] "kindnet-ss96t" [a57b0d98-03aa-45a1-a52d-fa5c7752f339] Running
	I0816 17:52:44.692186  285045 system_pods.go:89] "kube-apiserver-addons-035693" [46468507-7fa1-48a0-86a7-bb8c24da898a] Running
	I0816 17:52:44.692205  285045 system_pods.go:89] "kube-controller-manager-addons-035693" [437a7398-1257-4c14-9bdb-dc231abacfc3] Running
	I0816 17:52:44.692244  285045 system_pods.go:89] "kube-ingress-dns-minikube" [678e56e7-144e-4853-bb85-8157ca9cdd5d] Running
	I0816 17:52:44.692265  285045 system_pods.go:89] "kube-proxy-gk9xc" [fdb8dfd7-8793-4882-9b5f-d512e5caff6f] Running
	I0816 17:52:44.692284  285045 system_pods.go:89] "kube-scheduler-addons-035693" [9996406e-7cbf-43cf-8100-4a4e1fed2cb7] Running
	I0816 17:52:44.692296  285045 system_pods.go:89] "metrics-server-8988944d9-ssk4x" [0bdf104e-0061-4330-aaa3-3ed64ee249e7] Running
	I0816 17:52:44.692301  285045 system_pods.go:89] "nvidia-device-plugin-daemonset-jsx2r" [c4f0b8cd-7cfb-4b35-b194-ec9b1febfd6b] Running
	I0816 17:52:44.692305  285045 system_pods.go:89] "registry-6fb4cdfc84-tm8w6" [7a1098d6-9eed-44ed-b050-d7eb7f621f53] Running
	I0816 17:52:44.692309  285045 system_pods.go:89] "registry-proxy-t2nrw" [b85d1b9d-5cbc-4b35-a578-9eb458257f07] Running
	I0816 17:52:44.692315  285045 system_pods.go:89] "snapshot-controller-56fcc65765-cnwmk" [1d49b80b-be25-4e9d-ba9b-44170fa68be0] Running
	I0816 17:52:44.692319  285045 system_pods.go:89] "snapshot-controller-56fcc65765-gr2ps" [1efa743e-f16d-4942-8095-a87be8bd0e66] Running
	I0816 17:52:44.692323  285045 system_pods.go:89] "storage-provisioner" [7ce79565-40dd-4899-9f49-003e0e94fdd9] Running
	I0816 17:52:44.692334  285045 system_pods.go:126] duration metric: took 11.073024ms to wait for k8s-apps to be running ...
	I0816 17:52:44.692343  285045 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 17:52:44.692404  285045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:52:44.705414  285045 system_svc.go:56] duration metric: took 13.060632ms WaitForService to wait for kubelet
	I0816 17:52:44.705445  285045 kubeadm.go:582] duration metric: took 2m40.817316329s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 17:52:44.705469  285045 node_conditions.go:102] verifying NodePressure condition ...
	I0816 17:52:44.709137  285045 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0816 17:52:44.709188  285045 node_conditions.go:123] node cpu capacity is 2
	I0816 17:52:44.709200  285045 node_conditions.go:105] duration metric: took 3.724607ms to run NodePressure ...
	I0816 17:52:44.709213  285045 start.go:241] waiting for startup goroutines ...
	I0816 17:52:44.709221  285045 start.go:246] waiting for cluster config update ...
	I0816 17:52:44.709238  285045 start.go:255] writing updated cluster config ...
	I0816 17:52:44.709541  285045 ssh_runner.go:195] Run: rm -f paused
	I0816 17:52:45.116965  285045 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 17:52:45.120058  285045 out.go:177] * Done! kubectl is now configured to use "addons-035693" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 16 17:56:38 addons-035693 crio[963]: time="2024-08-16 17:56:38.059311489Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8d439e74-bb89-43c1-913d-18e2aad7446b name=/runtime.v1.ImageService/ImageStatus
	Aug 16 17:56:38 addons-035693 crio[963]: time="2024-08-16 17:56:38.061252017Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-tjfkk/hello-world-app" id=45d566f8-b462-4166-a178-3eacf851646f name=/runtime.v1.RuntimeService/CreateContainer
	Aug 16 17:56:38 addons-035693 crio[963]: time="2024-08-16 17:56:38.061473472Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 16 17:56:38 addons-035693 crio[963]: time="2024-08-16 17:56:38.091051452Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ff0015cdff5e5aef8e73a5718c3a41979764a5b2726231011cd2fb42b7775f02/merged/etc/passwd: no such file or directory"
	Aug 16 17:56:38 addons-035693 crio[963]: time="2024-08-16 17:56:38.091258729Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ff0015cdff5e5aef8e73a5718c3a41979764a5b2726231011cd2fb42b7775f02/merged/etc/group: no such file or directory"
	Aug 16 17:56:38 addons-035693 crio[963]: time="2024-08-16 17:56:38.134215639Z" level=info msg="Created container a8a7185f682c17a58ab8bdd186b92acaa4c1070f0add9266b85924a0137c35d9: default/hello-world-app-55bf9c44b4-tjfkk/hello-world-app" id=45d566f8-b462-4166-a178-3eacf851646f name=/runtime.v1.RuntimeService/CreateContainer
	Aug 16 17:56:38 addons-035693 crio[963]: time="2024-08-16 17:56:38.134962072Z" level=info msg="Starting container: a8a7185f682c17a58ab8bdd186b92acaa4c1070f0add9266b85924a0137c35d9" id=629db2b4-ac16-4689-82d1-cfcb89034902 name=/runtime.v1.RuntimeService/StartContainer
	Aug 16 17:56:38 addons-035693 crio[963]: time="2024-08-16 17:56:38.141651607Z" level=info msg="Started container" PID=8428 containerID=a8a7185f682c17a58ab8bdd186b92acaa4c1070f0add9266b85924a0137c35d9 description=default/hello-world-app-55bf9c44b4-tjfkk/hello-world-app id=629db2b4-ac16-4689-82d1-cfcb89034902 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b7dfdd1af7e75a93cdad5ed07ae661dd336f29e2da2c7265c16a28dcc76c35fb
	Aug 16 17:56:38 addons-035693 crio[963]: time="2024-08-16 17:56:38.583686490Z" level=info msg="Removing container: 06c72a4a91fcb16caaac101aa8bf863c68925a07c6f9fc8a918044dc3d86cc07" id=68ce5b31-a1e4-4e08-bd9d-3297c8e76acb name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 16 17:56:38 addons-035693 crio[963]: time="2024-08-16 17:56:38.605515979Z" level=info msg="Removed container 06c72a4a91fcb16caaac101aa8bf863c68925a07c6f9fc8a918044dc3d86cc07: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=68ce5b31-a1e4-4e08-bd9d-3297c8e76acb name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 16 17:56:40 addons-035693 crio[963]: time="2024-08-16 17:56:40.308551422Z" level=info msg="Stopping container: 085c1bb71ef37abe2844f2172a8f841d11c876086d62343c4eeae9edb52fcea5 (timeout: 2s)" id=1b05f1d5-e7f0-4606-85b8-bb32e50ea6c2 name=/runtime.v1.RuntimeService/StopContainer
	Aug 16 17:56:42 addons-035693 crio[963]: time="2024-08-16 17:56:42.315565862Z" level=warning msg="Stopping container 085c1bb71ef37abe2844f2172a8f841d11c876086d62343c4eeae9edb52fcea5 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=1b05f1d5-e7f0-4606-85b8-bb32e50ea6c2 name=/runtime.v1.RuntimeService/StopContainer
	Aug 16 17:56:42 addons-035693 conmon[4714]: conmon 085c1bb71ef37abe2844 <ninfo>: container 4725 exited with status 137
	Aug 16 17:56:42 addons-035693 crio[963]: time="2024-08-16 17:56:42.455127461Z" level=info msg="Stopped container 085c1bb71ef37abe2844f2172a8f841d11c876086d62343c4eeae9edb52fcea5: ingress-nginx/ingress-nginx-controller-bc57996ff-2xxm6/controller" id=1b05f1d5-e7f0-4606-85b8-bb32e50ea6c2 name=/runtime.v1.RuntimeService/StopContainer
	Aug 16 17:56:42 addons-035693 crio[963]: time="2024-08-16 17:56:42.455753132Z" level=info msg="Stopping pod sandbox: a629521fd300726232807d723b2f60943451dc5c9a417ecdf73b2eed7c60bba1" id=3d7a32bc-422a-430c-bbc5-84effbb695c0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 16 17:56:42 addons-035693 crio[963]: time="2024-08-16 17:56:42.459378813Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-Z7WGR3465HMCS5KF - [0:0]\n:KUBE-HP-GHDZLHUB6YNDXZ43 - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-GHDZLHUB6YNDXZ43\n-X KUBE-HP-Z7WGR3465HMCS5KF\nCOMMIT\n"
	Aug 16 17:56:42 addons-035693 crio[963]: time="2024-08-16 17:56:42.461709402Z" level=info msg="Closing host port tcp:80"
	Aug 16 17:56:42 addons-035693 crio[963]: time="2024-08-16 17:56:42.461771605Z" level=info msg="Closing host port tcp:443"
	Aug 16 17:56:42 addons-035693 crio[963]: time="2024-08-16 17:56:42.463434826Z" level=info msg="Host port tcp:80 does not have an open socket"
	Aug 16 17:56:42 addons-035693 crio[963]: time="2024-08-16 17:56:42.463472651Z" level=info msg="Host port tcp:443 does not have an open socket"
	Aug 16 17:56:42 addons-035693 crio[963]: time="2024-08-16 17:56:42.463663666Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-2xxm6 Namespace:ingress-nginx ID:a629521fd300726232807d723b2f60943451dc5c9a417ecdf73b2eed7c60bba1 UID:f95f446b-c51a-47e5-9fc3-68ca0527411b NetNS:/var/run/netns/5a0eeb41-5c4f-4e15-b1be-2633e7f22b5d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 16 17:56:42 addons-035693 crio[963]: time="2024-08-16 17:56:42.463806935Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-2xxm6 from CNI network \"kindnet\" (type=ptp)"
	Aug 16 17:56:42 addons-035693 crio[963]: time="2024-08-16 17:56:42.491018051Z" level=info msg="Stopped pod sandbox: a629521fd300726232807d723b2f60943451dc5c9a417ecdf73b2eed7c60bba1" id=3d7a32bc-422a-430c-bbc5-84effbb695c0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 16 17:56:42 addons-035693 crio[963]: time="2024-08-16 17:56:42.592133389Z" level=info msg="Removing container: 085c1bb71ef37abe2844f2172a8f841d11c876086d62343c4eeae9edb52fcea5" id=ad8d8d23-7a70-4f6c-8f7b-866c853217c0 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 16 17:56:42 addons-035693 crio[963]: time="2024-08-16 17:56:42.608522067Z" level=info msg="Removed container 085c1bb71ef37abe2844f2172a8f841d11c876086d62343c4eeae9edb52fcea5: ingress-nginx/ingress-nginx-controller-bc57996ff-2xxm6/controller" id=ad8d8d23-7a70-4f6c-8f7b-866c853217c0 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a8a7185f682c1       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        9 seconds ago       Running             hello-world-app           0                   b7dfdd1af7e75       hello-world-app-55bf9c44b4-tjfkk
	32ce343b9cee4       docker.io/library/nginx@sha256:ba188f579f7a2638229e326e78c957a185630e303757813ef1ad7aac1b8248b6                              2 minutes ago       Running             nginx                     0                   bfddf0315f927       nginx
	8ca4276cff561       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   a53ed5ae8418d       busybox
	2970f6e69eeae       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             5 minutes ago       Running             local-path-provisioner    0                   357c57ee43388       local-path-provisioner-86d989889c-q9262
	4ea49a234283c       420193b27261a8d37b9fb1faeed45094cefa47e72a7538fd5a6c05e8b5ce192e                                                             5 minutes ago       Exited              patch                     1                   4cf660e3241dc       ingress-nginx-admission-patch-qkv45
	852a4d200668c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   5 minutes ago       Exited              create                    0                   be1f6065f7ad6       ingress-nginx-admission-create-mkgjx
	f3a4ca79ab8fe       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70        5 minutes ago       Running             metrics-server            0                   648e2f793ade0       metrics-server-8988944d9-ssk4x
	012e15f9a1f7c       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                             5 minutes ago       Running             coredns                   0                   2b67237d2dc49       coredns-6f6b679f8f-rbz4z
	63d1a9f6ba13e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago       Running             storage-provisioner       0                   61293d44ba42f       storage-provisioner
	3dd59dbbe567f       docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64                           6 minutes ago       Running             kindnet-cni               0                   d9a660edd4ffe       kindnet-ss96t
	2f1c05f8b2d29       71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89                                                             6 minutes ago       Running             kube-proxy                0                   6c2ddff24a6f6       kube-proxy-gk9xc
	16603acf52d48       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388                                                             6 minutes ago       Running             kube-apiserver            0                   dec70b8738945       kube-apiserver-addons-035693
	37ec5b1fb253c       fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb                                                             6 minutes ago       Running             kube-scheduler            0                   fcd75fe526843       kube-scheduler-addons-035693
	561d83fad4550       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             6 minutes ago       Running             etcd                      0                   6bda942b786e6       etcd-addons-035693
	8087f1df94210       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd                                                             6 minutes ago       Running             kube-controller-manager   0                   f4447781e9f34       kube-controller-manager-addons-035693
	
	
	==> coredns [012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52] <==
	[INFO] 10.244.0.2:45338 - 44082 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001699954s
	[INFO] 10.244.0.2:41294 - 38935 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000081042s
	[INFO] 10.244.0.2:41294 - 38698 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000193968s
	[INFO] 10.244.0.2:52210 - 27882 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000110022s
	[INFO] 10.244.0.2:52210 - 1686 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000172487s
	[INFO] 10.244.0.2:60220 - 62726 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000059413s
	[INFO] 10.244.0.2:60220 - 59652 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000036808s
	[INFO] 10.244.0.2:37554 - 59365 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000048861s
	[INFO] 10.244.0.2:37554 - 61163 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00003607s
	[INFO] 10.244.0.2:58464 - 869 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001352066s
	[INFO] 10.244.0.2:58464 - 6499 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001419536s
	[INFO] 10.244.0.2:35475 - 56940 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000068406s
	[INFO] 10.244.0.2:35475 - 65362 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000114773s
	[INFO] 10.244.0.20:58227 - 5366 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000172511s
	[INFO] 10.244.0.20:41401 - 8599 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000069079s
	[INFO] 10.244.0.20:43759 - 13969 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000125784s
	[INFO] 10.244.0.20:52886 - 11569 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000098043s
	[INFO] 10.244.0.20:47274 - 34174 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000212118s
	[INFO] 10.244.0.20:53242 - 61048 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000097928s
	[INFO] 10.244.0.20:33077 - 62665 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003038531s
	[INFO] 10.244.0.20:55133 - 20954 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003908852s
	[INFO] 10.244.0.20:43332 - 13158 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001371981s
	[INFO] 10.244.0.20:42221 - 58688 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001199461s
	[INFO] 10.244.0.22:48707 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000175974s
	[INFO] 10.244.0.22:42769 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000122247s
	
	
	==> describe nodes <==
	Name:               addons-035693
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-035693
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=addons-035693
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T17_49_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-035693
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:49:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-035693
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:56:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:54:35 +0000   Fri, 16 Aug 2024 17:49:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:54:35 +0000   Fri, 16 Aug 2024 17:49:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:54:35 +0000   Fri, 16 Aug 2024 17:49:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:54:35 +0000   Fri, 16 Aug 2024 17:50:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-035693
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc4014d276dd41d38ed343c7bfd38367
	  System UUID:                0d15ea8b-f4e9-4cd8-9086-0f5742e0dff3
	  Boot ID:                    42540284-5019-4b99-817b-c2e55433aff8
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  default                     hello-world-app-55bf9c44b4-tjfkk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 coredns-6f6b679f8f-rbz4z                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m44s
	  kube-system                 etcd-addons-035693                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m49s
	  kube-system                 kindnet-ss96t                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m45s
	  kube-system                 kube-apiserver-addons-035693               250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m49s
	  kube-system                 kube-controller-manager-addons-035693      200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m49s
	  kube-system                 kube-proxy-gk9xc                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	  kube-system                 kube-scheduler-addons-035693               100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m49s
	  kube-system                 metrics-server-8988944d9-ssk4x             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m38s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  local-path-storage          local-path-provisioner-86d989889c-q9262    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m36s                  kube-proxy       
	  Normal   Starting                 6m56s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m56s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m56s (x8 over 6m56s)  kubelet          Node addons-035693 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m56s (x8 over 6m56s)  kubelet          Node addons-035693 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m56s (x7 over 6m56s)  kubelet          Node addons-035693 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m49s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m49s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m49s (x2 over 6m49s)  kubelet          Node addons-035693 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m49s (x2 over 6m49s)  kubelet          Node addons-035693 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m49s (x2 over 6m49s)  kubelet          Node addons-035693 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m45s                  node-controller  Node addons-035693 event: Registered Node addons-035693 in Controller
	  Normal   NodeReady                5m57s                  kubelet          Node addons-035693 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug16 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.013703] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.456008] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.059863] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002591] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.017003] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.004100] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.003508] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.725784] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.595200] kauditd_printk_skb: 36 callbacks suppressed
	[Aug16 16:48] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Aug16 16:51] hrtimer: interrupt took 1350672 ns
	[Aug16 17:21] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea] <==
	{"level":"info","ts":"2024-08-16T17:49:51.896129Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-16T17:49:51.896361Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-16T17:49:52.148611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-16T17:49:52.148754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-16T17:49:52.148807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-16T17:49:52.148861Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-16T17:49:52.148897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-16T17:49:52.148937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-16T17:49:52.148975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-16T17:49:52.154311Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-035693 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T17:49:52.154740Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T17:49:52.154781Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T17:49:52.157349Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T17:49:52.160671Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T17:49:52.160776Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T17:49:52.160879Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T17:49:52.160984Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T17:49:52.161042Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T17:49:52.154809Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T17:49:52.161885Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T17:49:52.162838Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T17:49:52.165142Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-16T17:50:05.420955Z","caller":"traceutil/trace.go:171","msg":"trace[427995030] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"213.342128ms","start":"2024-08-16T17:50:05.207507Z","end":"2024-08-16T17:50:05.420850Z","steps":["trace[427995030] 'process raft request'  (duration: 207.054353ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:50:06.204366Z","caller":"traceutil/trace.go:171","msg":"trace[1409747860] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"272.602168ms","start":"2024-08-16T17:50:05.931749Z","end":"2024-08-16T17:50:06.204351Z","steps":["trace[1409747860] 'process raft request'  (duration: 272.477033ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:50:07.416840Z","caller":"traceutil/trace.go:171","msg":"trace[2140881632] transaction","detail":"{read_only:false; response_revision:413; number_of_response:1; }","duration":"117.650046ms","start":"2024-08-16T17:50:07.299169Z","end":"2024-08-16T17:50:07.416820Z","steps":["trace[2140881632] 'process raft request'  (duration: 55.75318ms)","trace[2140881632] 'compare'  (duration: 51.745647ms)"],"step_count":2}
	
	
	==> kernel <==
	 17:56:47 up  1:39,  0 users,  load average: 1.13, 1.18, 1.91
	Linux addons-035693 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac] <==
	I0816 17:55:29.934116       1 main.go:299] handling current node
	W0816 17:55:36.029434       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 17:55:36.029475       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0816 17:55:39.933934       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 17:55:39.933968       1 main.go:299] handling current node
	W0816 17:55:48.433873       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0816 17:55:48.433909       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0816 17:55:49.933485       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 17:55:49.933520       1 main.go:299] handling current node
	I0816 17:55:59.933877       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 17:55:59.933908       1 main.go:299] handling current node
	W0816 17:56:04.180409       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0816 17:56:04.180465       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0816 17:56:09.933029       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 17:56:09.933059       1 main.go:299] handling current node
	I0816 17:56:19.933428       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 17:56:19.933464       1 main.go:299] handling current node
	W0816 17:56:25.336706       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 17:56:25.336747       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0816 17:56:29.933708       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 17:56:29.933746       1 main.go:299] handling current node
	W0816 17:56:37.492355       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0816 17:56:37.492494       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0816 17:56:39.933991       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 17:56:39.934027       1 main.go:299] handling current node
	
	
	==> kube-apiserver [16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2] <==
	E0816 17:52:10.913782       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.107.224:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.107.224:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.107.224:443: connect: connection refused" logger="UnhandledError"
	I0816 17:52:11.021316       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0816 17:52:54.195348       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43404: use of closed network connection
	E0816 17:52:54.355261       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43434: use of closed network connection
	I0816 17:53:36.265906       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0816 17:53:46.288399       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.79.244"}
	I0816 17:53:57.813291       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 17:53:57.813342       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 17:53:57.855480       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 17:53:57.855536       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 17:53:57.941515       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 17:53:57.941727       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 17:53:58.013443       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 17:53:58.013509       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 17:53:58.036329       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 17:53:58.036697       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0816 17:53:59.014149       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0816 17:53:59.037371       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0816 17:53:59.166301       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0816 17:54:09.974518       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0816 17:54:11.002766       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0816 17:54:15.639164       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0816 17:54:15.948556       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.63.102"}
	I0816 17:56:36.879946       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.165.101"}
	E0816 17:56:39.359359       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e] <==
	W0816 17:55:28.363573       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 17:55:28.363617       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 17:55:41.965868       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 17:55:41.965912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 17:55:51.113837       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 17:55:51.113884       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 17:56:01.354671       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 17:56:01.354718       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 17:56:17.209702       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 17:56:17.209754       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 17:56:17.417233       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 17:56:17.417288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0816 17:56:36.631444       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="41.064233ms"
	I0816 17:56:36.651452       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="19.357953ms"
	I0816 17:56:36.671613       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="20.036095ms"
	I0816 17:56:36.671773       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="51.979µs"
	I0816 17:56:38.604236       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="12.520923ms"
	I0816 17:56:38.607485       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="53.193µs"
	I0816 17:56:39.285461       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0816 17:56:39.288222       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="8.156µs"
	I0816 17:56:39.291138       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0816 17:56:44.447997       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 17:56:44.448045       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 17:56:46.033384       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 17:56:46.033534       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3] <==
	I0816 17:50:09.285955       1 server_linux.go:66] "Using iptables proxy"
	I0816 17:50:10.422616       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0816 17:50:10.422687       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 17:50:10.765993       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0816 17:50:10.766078       1 server_linux.go:169] "Using iptables Proxier"
	I0816 17:50:10.778956       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 17:50:10.789207       1 server.go:483] "Version info" version="v1.31.0"
	I0816 17:50:10.789332       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:50:10.793126       1 config.go:197] "Starting service config controller"
	I0816 17:50:10.793247       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 17:50:10.793327       1 config.go:104] "Starting endpoint slice config controller"
	I0816 17:50:10.793370       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 17:50:10.793907       1 config.go:326] "Starting node config controller"
	I0816 17:50:10.793974       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 17:50:10.894446       1 shared_informer.go:320] Caches are synced for node config
	I0816 17:50:10.894509       1 shared_informer.go:320] Caches are synced for service config
	I0816 17:50:10.894536       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db] <==
	W0816 17:49:56.006825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 17:49:56.017124       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:49:56.007107       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 17:49:56.017216       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 17:49:56.007158       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 17:49:56.017335       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:49:56.007222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 17:49:56.017427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:49:56.017697       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 17:49:56.017761       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:49:56.017874       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 17:49:56.017921       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:49:56.018037       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 17:49:56.018082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:49:56.018190       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 17:49:56.018254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:49:56.893453       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 17:49:56.893499       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:49:56.950082       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 17:49:56.950126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:49:56.981236       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 17:49:56.981286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:49:57.037808       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 17:49:57.037939       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0816 17:49:58.988972       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 17:56:36 addons-035693 kubelet[1482]: I0816 17:56:36.784781    1482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfhrz\" (UniqueName: \"kubernetes.io/projected/19ab0c07-0879-4694-973f-1d99d7284ca4-kube-api-access-hfhrz\") pod \"hello-world-app-55bf9c44b4-tjfkk\" (UID: \"19ab0c07-0879-4694-973f-1d99d7284ca4\") " pod="default/hello-world-app-55bf9c44b4-tjfkk"
	Aug 16 17:56:37 addons-035693 kubelet[1482]: I0816 17:56:37.997457    1482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h5r6\" (UniqueName: \"kubernetes.io/projected/678e56e7-144e-4853-bb85-8157ca9cdd5d-kube-api-access-7h5r6\") pod \"678e56e7-144e-4853-bb85-8157ca9cdd5d\" (UID: \"678e56e7-144e-4853-bb85-8157ca9cdd5d\") "
	Aug 16 17:56:38 addons-035693 kubelet[1482]: I0816 17:56:37.999435    1482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/678e56e7-144e-4853-bb85-8157ca9cdd5d-kube-api-access-7h5r6" (OuterVolumeSpecName: "kube-api-access-7h5r6") pod "678e56e7-144e-4853-bb85-8157ca9cdd5d" (UID: "678e56e7-144e-4853-bb85-8157ca9cdd5d"). InnerVolumeSpecName "kube-api-access-7h5r6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 16 17:56:38 addons-035693 kubelet[1482]: I0816 17:56:38.097886    1482 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7h5r6\" (UniqueName: \"kubernetes.io/projected/678e56e7-144e-4853-bb85-8157ca9cdd5d-kube-api-access-7h5r6\") on node \"addons-035693\" DevicePath \"\""
	Aug 16 17:56:38 addons-035693 kubelet[1482]: I0816 17:56:38.580944    1482 scope.go:117] "RemoveContainer" containerID="06c72a4a91fcb16caaac101aa8bf863c68925a07c6f9fc8a918044dc3d86cc07"
	Aug 16 17:56:38 addons-035693 kubelet[1482]: I0816 17:56:38.590587    1482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-tjfkk" podStartSLOduration=1.554956626 podStartE2EDuration="2.590562518s" podCreationTimestamp="2024-08-16 17:56:36 +0000 UTC" firstStartedPulling="2024-08-16 17:56:37.022186695 +0000 UTC m=+398.528764669" lastFinishedPulling="2024-08-16 17:56:38.057792588 +0000 UTC m=+399.564370561" observedRunningTime="2024-08-16 17:56:38.589930783 +0000 UTC m=+400.096508757" watchObservedRunningTime="2024-08-16 17:56:38.590562518 +0000 UTC m=+400.097140492"
	Aug 16 17:56:38 addons-035693 kubelet[1482]: I0816 17:56:38.605996    1482 scope.go:117] "RemoveContainer" containerID="06c72a4a91fcb16caaac101aa8bf863c68925a07c6f9fc8a918044dc3d86cc07"
	Aug 16 17:56:38 addons-035693 kubelet[1482]: E0816 17:56:38.606593    1482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06c72a4a91fcb16caaac101aa8bf863c68925a07c6f9fc8a918044dc3d86cc07\": container with ID starting with 06c72a4a91fcb16caaac101aa8bf863c68925a07c6f9fc8a918044dc3d86cc07 not found: ID does not exist" containerID="06c72a4a91fcb16caaac101aa8bf863c68925a07c6f9fc8a918044dc3d86cc07"
	Aug 16 17:56:38 addons-035693 kubelet[1482]: I0816 17:56:38.606631    1482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06c72a4a91fcb16caaac101aa8bf863c68925a07c6f9fc8a918044dc3d86cc07"} err="failed to get container status \"06c72a4a91fcb16caaac101aa8bf863c68925a07c6f9fc8a918044dc3d86cc07\": rpc error: code = NotFound desc = could not find container \"06c72a4a91fcb16caaac101aa8bf863c68925a07c6f9fc8a918044dc3d86cc07\": container with ID starting with 06c72a4a91fcb16caaac101aa8bf863c68925a07c6f9fc8a918044dc3d86cc07 not found: ID does not exist"
	Aug 16 17:56:38 addons-035693 kubelet[1482]: I0816 17:56:38.692654    1482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="678e56e7-144e-4853-bb85-8157ca9cdd5d" path="/var/lib/kubelet/pods/678e56e7-144e-4853-bb85-8157ca9cdd5d/volumes"
	Aug 16 17:56:38 addons-035693 kubelet[1482]: E0816 17:56:38.833614    1482 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830998833327385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:56:38 addons-035693 kubelet[1482]: E0816 17:56:38.833654    1482 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830998833327385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:56:40 addons-035693 kubelet[1482]: I0816 17:56:40.691866    1482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bfff3e6-87c2-4fa5-9990-a7a38e929c5e" path="/var/lib/kubelet/pods/1bfff3e6-87c2-4fa5-9990-a7a38e929c5e/volumes"
	Aug 16 17:56:40 addons-035693 kubelet[1482]: I0816 17:56:40.692266    1482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7208706a-08fa-460c-9f5a-ee4733c9d5e1" path="/var/lib/kubelet/pods/7208706a-08fa-460c-9f5a-ee4733c9d5e1/volumes"
	Aug 16 17:56:42 addons-035693 kubelet[1482]: I0816 17:56:42.590876    1482 scope.go:117] "RemoveContainer" containerID="085c1bb71ef37abe2844f2172a8f841d11c876086d62343c4eeae9edb52fcea5"
	Aug 16 17:56:42 addons-035693 kubelet[1482]: I0816 17:56:42.608800    1482 scope.go:117] "RemoveContainer" containerID="085c1bb71ef37abe2844f2172a8f841d11c876086d62343c4eeae9edb52fcea5"
	Aug 16 17:56:42 addons-035693 kubelet[1482]: E0816 17:56:42.609209    1482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"085c1bb71ef37abe2844f2172a8f841d11c876086d62343c4eeae9edb52fcea5\": container with ID starting with 085c1bb71ef37abe2844f2172a8f841d11c876086d62343c4eeae9edb52fcea5 not found: ID does not exist" containerID="085c1bb71ef37abe2844f2172a8f841d11c876086d62343c4eeae9edb52fcea5"
	Aug 16 17:56:42 addons-035693 kubelet[1482]: I0816 17:56:42.609250    1482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"085c1bb71ef37abe2844f2172a8f841d11c876086d62343c4eeae9edb52fcea5"} err="failed to get container status \"085c1bb71ef37abe2844f2172a8f841d11c876086d62343c4eeae9edb52fcea5\": rpc error: code = NotFound desc = could not find container \"085c1bb71ef37abe2844f2172a8f841d11c876086d62343c4eeae9edb52fcea5\": container with ID starting with 085c1bb71ef37abe2844f2172a8f841d11c876086d62343c4eeae9edb52fcea5 not found: ID does not exist"
	Aug 16 17:56:42 addons-035693 kubelet[1482]: I0816 17:56:42.638608    1482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f95f446b-c51a-47e5-9fc3-68ca0527411b-webhook-cert\") pod \"f95f446b-c51a-47e5-9fc3-68ca0527411b\" (UID: \"f95f446b-c51a-47e5-9fc3-68ca0527411b\") "
	Aug 16 17:56:42 addons-035693 kubelet[1482]: I0816 17:56:42.638671    1482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vv52\" (UniqueName: \"kubernetes.io/projected/f95f446b-c51a-47e5-9fc3-68ca0527411b-kube-api-access-7vv52\") pod \"f95f446b-c51a-47e5-9fc3-68ca0527411b\" (UID: \"f95f446b-c51a-47e5-9fc3-68ca0527411b\") "
	Aug 16 17:56:42 addons-035693 kubelet[1482]: I0816 17:56:42.640654    1482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f95f446b-c51a-47e5-9fc3-68ca0527411b-kube-api-access-7vv52" (OuterVolumeSpecName: "kube-api-access-7vv52") pod "f95f446b-c51a-47e5-9fc3-68ca0527411b" (UID: "f95f446b-c51a-47e5-9fc3-68ca0527411b"). InnerVolumeSpecName "kube-api-access-7vv52". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 16 17:56:42 addons-035693 kubelet[1482]: I0816 17:56:42.641561    1482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f95f446b-c51a-47e5-9fc3-68ca0527411b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f95f446b-c51a-47e5-9fc3-68ca0527411b" (UID: "f95f446b-c51a-47e5-9fc3-68ca0527411b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 16 17:56:42 addons-035693 kubelet[1482]: I0816 17:56:42.691638    1482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f95f446b-c51a-47e5-9fc3-68ca0527411b" path="/var/lib/kubelet/pods/f95f446b-c51a-47e5-9fc3-68ca0527411b/volumes"
	Aug 16 17:56:42 addons-035693 kubelet[1482]: I0816 17:56:42.739422    1482 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f95f446b-c51a-47e5-9fc3-68ca0527411b-webhook-cert\") on node \"addons-035693\" DevicePath \"\""
	Aug 16 17:56:42 addons-035693 kubelet[1482]: I0816 17:56:42.739464    1482 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7vv52\" (UniqueName: \"kubernetes.io/projected/f95f446b-c51a-47e5-9fc3-68ca0527411b-kube-api-access-7vv52\") on node \"addons-035693\" DevicePath \"\""
	
	
	==> storage-provisioner [63d1a9f6ba13e8a852d2e9c0f42a6ffe70f60f0cdd413245f71726d5791c7fff] <==
	I0816 17:50:51.313414       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 17:50:51.328685       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 17:50:51.328735       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 17:50:51.336716       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 17:50:51.339869       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-035693_dbfaaf44-207e-4f5f-839a-e775aa43bdeb!
	I0816 17:50:51.337244       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2627d9f4-4c1e-47dc-8d77-6117f68f8057", APIVersion:"v1", ResourceVersion:"945", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-035693_dbfaaf44-207e-4f5f-839a-e775aa43bdeb became leader
	I0816 17:50:51.440116       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-035693_dbfaaf44-207e-4f5f-839a-e775aa43bdeb!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-035693 -n addons-035693
helpers_test.go:261: (dbg) Run:  kubectl --context addons-035693 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.49s)

TestAddons/parallel/MetricsServer (325.61s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.054962ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-ssk4x" [0bdf104e-0061-4330-aaa3-3ed64ee249e7] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003486896s
addons_test.go:417: (dbg) Run:  kubectl --context addons-035693 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-035693 top pods -n kube-system: exit status 1 (109.374057ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-rbz4z, age: 4m0.535833032s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-035693 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-035693 top pods -n kube-system: exit status 1 (92.697967ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-rbz4z, age: 4m3.980195364s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-035693 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-035693 top pods -n kube-system: exit status 1 (139.487446ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-rbz4z, age: 4m6.547820622s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-035693 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-035693 top pods -n kube-system: exit status 1 (117.322687ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-rbz4z, age: 4m13.068194865s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-035693 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-035693 top pods -n kube-system: exit status 1 (89.616272ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-rbz4z, age: 4m21.942671889s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-035693 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-035693 top pods -n kube-system: exit status 1 (118.216547ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-rbz4z, age: 4m39.20519826s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-035693 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-035693 top pods -n kube-system: exit status 1 (91.302646ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-rbz4z, age: 5m10.49226256s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-035693 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-035693 top pods -n kube-system: exit status 1 (100.214442ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-rbz4z, age: 5m37.916664143s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-035693 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-035693 top pods -n kube-system: exit status 1 (92.134192ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-rbz4z, age: 6m13.810715215s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-035693 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-035693 top pods -n kube-system: exit status 1 (95.59501ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-rbz4z, age: 7m28.469960808s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-035693 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-035693 top pods -n kube-system: exit status 1 (83.334979ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-rbz4z, age: 8m11.981262556s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-035693 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-035693 top pods -n kube-system: exit status 1 (85.921994ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-rbz4z, age: 9m17.999305458s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-035693 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-035693
helpers_test.go:235: (dbg) docker inspect addons-035693:

-- stdout --
	[
	    {
	        "Id": "476c3ad0d88c065a4f85fa189ca7b29f9ec649f33c7f7825b15962224faa28a8",
	        "Created": "2024-08-16T17:49:37.189676917Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 285535,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-16T17:49:37.337084512Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:decdd59746a9dba10062a73f6cd4b910c7b4e60613660b1022f8357747681c4d",
	        "ResolvConfPath": "/var/lib/docker/containers/476c3ad0d88c065a4f85fa189ca7b29f9ec649f33c7f7825b15962224faa28a8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/476c3ad0d88c065a4f85fa189ca7b29f9ec649f33c7f7825b15962224faa28a8/hostname",
	        "HostsPath": "/var/lib/docker/containers/476c3ad0d88c065a4f85fa189ca7b29f9ec649f33c7f7825b15962224faa28a8/hosts",
	        "LogPath": "/var/lib/docker/containers/476c3ad0d88c065a4f85fa189ca7b29f9ec649f33c7f7825b15962224faa28a8/476c3ad0d88c065a4f85fa189ca7b29f9ec649f33c7f7825b15962224faa28a8-json.log",
	        "Name": "/addons-035693",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-035693:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-035693",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/947dbc3b107a5acd33a6b006e02949c9b2452bfe48994eeae84559323de14ce1-init/diff:/var/lib/docker/overlay2/70037d522e00dd0a89a9843a2c58153706242dc665eddca7b5915c2487a67ddf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/947dbc3b107a5acd33a6b006e02949c9b2452bfe48994eeae84559323de14ce1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/947dbc3b107a5acd33a6b006e02949c9b2452bfe48994eeae84559323de14ce1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/947dbc3b107a5acd33a6b006e02949c9b2452bfe48994eeae84559323de14ce1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-035693",
	                "Source": "/var/lib/docker/volumes/addons-035693/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-035693",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-035693",
	                "name.minikube.sigs.k8s.io": "addons-035693",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "92e865572677ca1bbcd69c32a59062040b6c58e3627396f20143012c7bea7194",
	            "SandboxKey": "/var/run/docker/netns/92e865572677",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-035693": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8b521fbf1af19d1dfad15c433f7e1bd8503f271e638f4b949c047a9c0b659da8",
	                    "EndpointID": "fdf53fe0696062dae63d98a57ebd77b53282244e6dbe57c6383a7ded0bc64d06",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-035693",
	                        "476c3ad0d88c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-035693 -n addons-035693
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-035693 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-035693 logs -n 25: (1.424076004s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-240993 | jenkins | v1.33.1 | 16 Aug 24 17:49 UTC |                     |
	|         | download-docker-240993                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-240993                                                                   | download-docker-240993 | jenkins | v1.33.1 | 16 Aug 24 17:49 UTC | 16 Aug 24 17:49 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-205704   | jenkins | v1.33.1 | 16 Aug 24 17:49 UTC |                     |
	|         | binary-mirror-205704                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41837                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-205704                                                                     | binary-mirror-205704   | jenkins | v1.33.1 | 16 Aug 24 17:49 UTC | 16 Aug 24 17:49 UTC |
	| addons  | disable dashboard -p                                                                        | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:49 UTC |                     |
	|         | addons-035693                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:49 UTC |                     |
	|         | addons-035693                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-035693 --wait=true                                                                | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:49 UTC | 16 Aug 24 17:52 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-035693 addons disable                                                                | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:52 UTC | 16 Aug 24 17:53 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-035693 addons disable                                                                | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:53 UTC | 16 Aug 24 17:53 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ip      | addons-035693 ip                                                                            | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:53 UTC | 16 Aug 24 17:53 UTC |
	| addons  | addons-035693 addons disable                                                                | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:53 UTC | 16 Aug 24 17:53 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:53 UTC | 16 Aug 24 17:53 UTC |
	|         | -p addons-035693                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-035693 ssh cat                                                                       | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:53 UTC | 16 Aug 24 17:53 UTC |
	|         | /opt/local-path-provisioner/pvc-8f665f0d-7f70-4b2f-b5f6-7d515479e3bb_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-035693 addons disable                                                                | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:53 UTC | 16 Aug 24 17:53 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:53 UTC | 16 Aug 24 17:53 UTC |
	|         | addons-035693                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:53 UTC | 16 Aug 24 17:53 UTC |
	|         | -p addons-035693                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-035693 addons                                                                        | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:53 UTC | 16 Aug 24 17:53 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-035693 addons                                                                        | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:53 UTC | 16 Aug 24 17:53 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-035693 addons disable                                                                | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:53 UTC | 16 Aug 24 17:54 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:54 UTC | 16 Aug 24 17:54 UTC |
	|         | addons-035693                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-035693 ssh curl -s                                                                   | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:54 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-035693 ip                                                                            | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:56 UTC | 16 Aug 24 17:56 UTC |
	| addons  | addons-035693 addons disable                                                                | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:56 UTC | 16 Aug 24 17:56 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-035693 addons disable                                                                | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:56 UTC | 16 Aug 24 17:56 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-035693 addons                                                                        | addons-035693          | jenkins | v1.33.1 | 16 Aug 24 17:59 UTC | 16 Aug 24 17:59 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 17:49:12
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 17:49:12.491073  285045 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:49:12.491277  285045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:49:12.491304  285045 out.go:358] Setting ErrFile to fd 2...
	I0816 17:49:12.491325  285045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:49:12.491607  285045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-278896/.minikube/bin
	I0816 17:49:12.492129  285045 out.go:352] Setting JSON to false
	I0816 17:49:12.493053  285045 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5501,"bootTime":1723825052,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0816 17:49:12.493162  285045 start.go:139] virtualization:  
	I0816 17:49:12.495600  285045 out.go:177] * [addons-035693] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0816 17:49:12.497898  285045 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 17:49:12.498025  285045 notify.go:220] Checking for updates...
	I0816 17:49:12.501298  285045 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 17:49:12.503133  285045 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-278896/kubeconfig
	I0816 17:49:12.504749  285045 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-278896/.minikube
	I0816 17:49:12.506579  285045 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0816 17:49:12.508261  285045 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 17:49:12.510104  285045 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 17:49:12.533541  285045 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0816 17:49:12.533661  285045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 17:49:12.597131  285045 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-16 17:49:12.587883065 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 17:49:12.597250  285045 docker.go:307] overlay module found
	I0816 17:49:12.600472  285045 out.go:177] * Using the docker driver based on user configuration
	I0816 17:49:12.602096  285045 start.go:297] selected driver: docker
	I0816 17:49:12.602113  285045 start.go:901] validating driver "docker" against <nil>
	I0816 17:49:12.602127  285045 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 17:49:12.602776  285045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 17:49:12.661686  285045 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-16 17:49:12.650713657 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 17:49:12.661883  285045 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 17:49:12.662127  285045 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 17:49:12.664148  285045 out.go:177] * Using Docker driver with root privileges
	I0816 17:49:12.666027  285045 cni.go:84] Creating CNI manager for ""
	I0816 17:49:12.666058  285045 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0816 17:49:12.666076  285045 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 17:49:12.666204  285045 start.go:340] cluster config:
	{Name:addons-035693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-035693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:49:12.668308  285045 out.go:177] * Starting "addons-035693" primary control-plane node in "addons-035693" cluster
	I0816 17:49:12.669958  285045 cache.go:121] Beginning downloading kic base image for docker with crio
	I0816 17:49:12.671810  285045 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0816 17:49:12.673468  285045 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 17:49:12.673519  285045 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-278896/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0816 17:49:12.673536  285045 cache.go:56] Caching tarball of preloaded images
	I0816 17:49:12.673567  285045 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0816 17:49:12.673623  285045 preload.go:172] Found /home/jenkins/minikube-integration/19461-278896/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0816 17:49:12.673633  285045 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 17:49:12.673997  285045 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/config.json ...
	I0816 17:49:12.674030  285045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/config.json: {Name:mkbb27cbeedd58dd6672b815036d375a27c5cf00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:49:12.688632  285045 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0816 17:49:12.688756  285045 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0816 17:49:12.688783  285045 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0816 17:49:12.688789  285045 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0816 17:49:12.688796  285045 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0816 17:49:12.688807  285045 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0816 17:49:29.783534  285045 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0816 17:49:29.783577  285045 cache.go:194] Successfully downloaded all kic artifacts
	I0816 17:49:29.783616  285045 start.go:360] acquireMachinesLock for addons-035693: {Name:mk10c159bb3bc4a2c181acf77d64f0fe4d1d4dec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 17:49:29.783753  285045 start.go:364] duration metric: took 110.227µs to acquireMachinesLock for "addons-035693"
	I0816 17:49:29.783790  285045 start.go:93] Provisioning new machine with config: &{Name:addons-035693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-035693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 17:49:29.783869  285045 start.go:125] createHost starting for "" (driver="docker")
	I0816 17:49:29.786258  285045 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0816 17:49:29.786514  285045 start.go:159] libmachine.API.Create for "addons-035693" (driver="docker")
	I0816 17:49:29.786556  285045 client.go:168] LocalClient.Create starting
	I0816 17:49:29.786673  285045 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19461-278896/.minikube/certs/ca.pem
	I0816 17:49:30.086025  285045 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19461-278896/.minikube/certs/cert.pem
	I0816 17:49:30.701732  285045 cli_runner.go:164] Run: docker network inspect addons-035693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0816 17:49:30.717106  285045 cli_runner.go:211] docker network inspect addons-035693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0816 17:49:30.717189  285045 network_create.go:284] running [docker network inspect addons-035693] to gather additional debugging logs...
	I0816 17:49:30.717210  285045 cli_runner.go:164] Run: docker network inspect addons-035693
	W0816 17:49:30.732203  285045 cli_runner.go:211] docker network inspect addons-035693 returned with exit code 1
	I0816 17:49:30.732236  285045 network_create.go:287] error running [docker network inspect addons-035693]: docker network inspect addons-035693: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-035693 not found
	I0816 17:49:30.732250  285045 network_create.go:289] output of [docker network inspect addons-035693]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-035693 not found
	
	** /stderr **
	I0816 17:49:30.732353  285045 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 17:49:30.747845  285045 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400177be90}
	I0816 17:49:30.747888  285045 network_create.go:124] attempt to create docker network addons-035693 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0816 17:49:30.747955  285045 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-035693 addons-035693
	I0816 17:49:30.815066  285045 network_create.go:108] docker network addons-035693 192.168.49.0/24 created
	I0816 17:49:30.815110  285045 kic.go:121] calculated static IP "192.168.49.2" for the "addons-035693" container
	I0816 17:49:30.815186  285045 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0816 17:49:30.829817  285045 cli_runner.go:164] Run: docker volume create addons-035693 --label name.minikube.sigs.k8s.io=addons-035693 --label created_by.minikube.sigs.k8s.io=true
	I0816 17:49:30.846456  285045 oci.go:103] Successfully created a docker volume addons-035693
	I0816 17:49:30.846552  285045 cli_runner.go:164] Run: docker run --rm --name addons-035693-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-035693 --entrypoint /usr/bin/test -v addons-035693:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib
	I0816 17:49:32.942429  285045 cli_runner.go:217] Completed: docker run --rm --name addons-035693-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-035693 --entrypoint /usr/bin/test -v addons-035693:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib: (2.095830883s)
	I0816 17:49:32.942463  285045 oci.go:107] Successfully prepared a docker volume addons-035693
	I0816 17:49:32.942494  285045 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 17:49:32.942515  285045 kic.go:194] Starting extracting preloaded images to volume ...
	I0816 17:49:32.942594  285045 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19461-278896/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-035693:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir
	I0816 17:49:37.120912  285045 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19461-278896/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-035693:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir: (4.178279408s)
	I0816 17:49:37.120958  285045 kic.go:203] duration metric: took 4.178427772s to extract preloaded images to volume ...
	W0816 17:49:37.121092  285045 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0816 17:49:37.121201  285045 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0816 17:49:37.175152  285045 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-035693 --name addons-035693 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-035693 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-035693 --network addons-035693 --ip 192.168.49.2 --volume addons-035693:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d
	I0816 17:49:37.486971  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Running}}
	I0816 17:49:37.507017  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:49:37.529165  285045 cli_runner.go:164] Run: docker exec addons-035693 stat /var/lib/dpkg/alternatives/iptables
	I0816 17:49:37.597985  285045 oci.go:144] the created container "addons-035693" has a running status.
	I0816 17:49:37.598017  285045 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa...
	I0816 17:49:38.349462  285045 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0816 17:49:38.383555  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:49:38.402620  285045 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0816 17:49:38.402640  285045 kic_runner.go:114] Args: [docker exec --privileged addons-035693 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0816 17:49:38.469526  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:49:38.486282  285045 machine.go:93] provisionDockerMachine start ...
	I0816 17:49:38.486371  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:49:38.504535  285045 main.go:141] libmachine: Using SSH client type: native
	I0816 17:49:38.504839  285045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I0816 17:49:38.504857  285045 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 17:49:38.647906  285045 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-035693
	
	I0816 17:49:38.647972  285045 ubuntu.go:169] provisioning hostname "addons-035693"
	I0816 17:49:38.648045  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:49:38.666123  285045 main.go:141] libmachine: Using SSH client type: native
	I0816 17:49:38.666382  285045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I0816 17:49:38.666401  285045 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-035693 && echo "addons-035693" | sudo tee /etc/hostname
	I0816 17:49:38.808187  285045 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-035693
	
	I0816 17:49:38.808291  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:49:38.825627  285045 main.go:141] libmachine: Using SSH client type: native
	I0816 17:49:38.825867  285045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I0816 17:49:38.825890  285045 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-035693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-035693/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-035693' | sudo tee -a /etc/hosts; 
				fi
			fi
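The SSH command above is minikube's hostname fixup for `/etc/hosts`. A minimal sketch of the same logic, replayed against a scratch file so it runs without sudo (the hostname `addons-035693` is from the log; `HOSTS` is a stand-in path):

```shell
# Seed a scratch /etc/hosts with a stale 127.0.1.1 mapping.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

NAME=addons-035693
if ! grep -q "[[:space:]]${NAME}\$" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
        # An existing 127.0.1.1 line is rewritten in place...
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NAME}/" "$HOSTS"
    else
        # ...otherwise a fresh mapping is appended.
        echo "127.0.1.1 ${NAME}" >> "$HOSTS"
    fi
fi
grep '^127\.0\.1\.1' "$HOSTS"
```

Either branch leaves exactly one `127.0.1.1` mapping for the node name, which is why the command is safe to re-run on every provision.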
	I0816 17:49:38.960899  285045 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 17:49:38.960928  285045 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19461-278896/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-278896/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-278896/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-278896/.minikube}
	I0816 17:49:38.960957  285045 ubuntu.go:177] setting up certificates
	I0816 17:49:38.960966  285045 provision.go:84] configureAuth start
	I0816 17:49:38.961034  285045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-035693
	I0816 17:49:38.977611  285045 provision.go:143] copyHostCerts
	I0816 17:49:38.977695  285045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-278896/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-278896/.minikube/ca.pem (1082 bytes)
	I0816 17:49:38.977826  285045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-278896/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-278896/.minikube/cert.pem (1123 bytes)
	I0816 17:49:38.977887  285045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-278896/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-278896/.minikube/key.pem (1679 bytes)
	I0816 17:49:38.977939  285045 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-278896/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-278896/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-278896/.minikube/certs/ca-key.pem org=jenkins.addons-035693 san=[127.0.0.1 192.168.49.2 addons-035693 localhost minikube]
	I0816 17:49:39.140644  285045 provision.go:177] copyRemoteCerts
	I0816 17:49:39.140722  285045 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 17:49:39.140769  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:49:39.157356  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:49:39.249753  285045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-278896/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 17:49:39.275376  285045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-278896/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 17:49:39.299604  285045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-278896/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 17:49:39.323771  285045 provision.go:87] duration metric: took 362.782383ms to configureAuth
	I0816 17:49:39.323802  285045 ubuntu.go:193] setting minikube options for container-runtime
	I0816 17:49:39.323992  285045 config.go:182] Loaded profile config "addons-035693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:49:39.324105  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:49:39.340782  285045 main.go:141] libmachine: Using SSH client type: native
	I0816 17:49:39.341020  285045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I0816 17:49:39.341039  285045 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 17:49:39.576337  285045 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 17:49:39.576361  285045 machine.go:96] duration metric: took 1.090058129s to provisionDockerMachine
	I0816 17:49:39.576372  285045 client.go:171] duration metric: took 9.789804776s to LocalClient.Create
	I0816 17:49:39.576387  285045 start.go:167] duration metric: took 9.789873149s to libmachine.API.Create "addons-035693"
	I0816 17:49:39.576404  285045 start.go:293] postStartSetup for "addons-035693" (driver="docker")
	I0816 17:49:39.576414  285045 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 17:49:39.576477  285045 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 17:49:39.576517  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:49:39.595717  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:49:39.689843  285045 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 17:49:39.692978  285045 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 17:49:39.693014  285045 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 17:49:39.693026  285045 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 17:49:39.693033  285045 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0816 17:49:39.693044  285045 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-278896/.minikube/addons for local assets ...
	I0816 17:49:39.693118  285045 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-278896/.minikube/files for local assets ...
	I0816 17:49:39.693149  285045 start.go:296] duration metric: took 116.739657ms for postStartSetup
	I0816 17:49:39.693484  285045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-035693
	I0816 17:49:39.709424  285045 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/config.json ...
	I0816 17:49:39.709708  285045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:49:39.709767  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:49:39.726164  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:49:39.817366  285045 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0816 17:49:39.821749  285045 start.go:128] duration metric: took 10.037862917s to createHost
	I0816 17:49:39.821775  285045 start.go:83] releasing machines lock for "addons-035693", held for 10.038008836s
	I0816 17:49:39.821855  285045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-035693
	I0816 17:49:39.837362  285045 ssh_runner.go:195] Run: cat /version.json
	I0816 17:49:39.837413  285045 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 17:49:39.837426  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:49:39.837488  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:49:39.860745  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:49:39.863482  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:49:40.118456  285045 ssh_runner.go:195] Run: systemctl --version
	I0816 17:49:40.123336  285045 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 17:49:40.268896  285045 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 17:49:40.273440  285045 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 17:49:40.295465  285045 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0816 17:49:40.295550  285045 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 17:49:40.333676  285045 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
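The two `find ... -exec mv` runs above side-line conflicting CNI configs by renaming them with a `.mk_disabled` suffix rather than deleting them. A sketch of the same renames over a scratch directory (`CNIDIR` stands in for `/etc/cni/net.d`, with file names taken from the log):

```shell
# Populate a scratch CNI config dir with the files the log reports disabling.
CNIDIR=$(mktemp -d)
touch "$CNIDIR/200-loopback.conf" \
      "$CNIDIR/87-podman-bridge.conflist" \
      "$CNIDIR/100-crio-bridge.conf"

# Disable the loopback config first...
find "$CNIDIR" -maxdepth 1 -type f -name '*loopback.conf*' \
     -not -name '*.mk_disabled' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
# ...then any bridge/podman configs that are not already disabled.
find "$CNIDIR" -maxdepth 1 -type f \
     \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
     -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls "$CNIDIR"
```

The `-not -name '*.mk_disabled'` guard makes the operation idempotent, and the renamed files can be restored later by stripping the suffix.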
	I0816 17:49:40.333703  285045 start.go:495] detecting cgroup driver to use...
	I0816 17:49:40.333739  285045 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0816 17:49:40.333792  285045 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 17:49:40.352299  285045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 17:49:40.365554  285045 docker.go:217] disabling cri-docker service (if available) ...
	I0816 17:49:40.365707  285045 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 17:49:40.382014  285045 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 17:49:40.397221  285045 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 17:49:40.480365  285045 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 17:49:40.585807  285045 docker.go:233] disabling docker service ...
	I0816 17:49:40.585884  285045 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 17:49:40.606451  285045 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 17:49:40.618549  285045 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 17:49:40.715159  285045 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 17:49:40.812318  285045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 17:49:40.824829  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 17:49:40.841260  285045 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 17:49:40.841330  285045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:49:40.851380  285045 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 17:49:40.851504  285045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:49:40.861311  285045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:49:40.871201  285045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:49:40.881521  285045 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 17:49:40.891015  285045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:49:40.900769  285045 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:49:40.916899  285045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
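The run of `sed` commands above rewrites the CRI-O drop-in in place: pin the pause image, switch the cgroup manager, re-seat `conmon_cgroup`, and splice the unprivileged-port sysctl into `default_sysctls`. A sketch replaying the same edits against a scratch copy (the real target is `/etc/crio/crio.conf.d/02-crio.conf`; the seed contents below are assumed, not taken from the log):

```shell
# Scratch 02-crio.conf with plausible pre-edit values (assumed).
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
sed -i '/conmon_cgroup = .*/d' "$CONF"                        # drop the stale value
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF" # re-add after the manager
# Guarantee a default_sysctls array exists, then prepend the sysctl entry.
grep -q '^ *default_sysctls' "$CONF" || \
    sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"

cat "$CONF"
```

Everything is line-oriented text surgery rather than TOML parsing, which is why the `grep -q || sed` guard is needed to keep repeated runs from appending a second `default_sysctls` array. The `\n` escapes rely on GNU sed.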
	I0816 17:49:40.926850  285045 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 17:49:40.935824  285045 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 17:49:40.944250  285045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:49:41.022918  285045 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 17:49:41.135330  285045 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 17:49:41.135412  285045 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 17:49:41.138964  285045 start.go:563] Will wait 60s for crictl version
	I0816 17:49:41.139074  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:49:41.142934  285045 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 17:49:41.181669  285045 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0816 17:49:41.181795  285045 ssh_runner.go:195] Run: crio --version
	I0816 17:49:41.228599  285045 ssh_runner.go:195] Run: crio --version
	I0816 17:49:41.269844  285045 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0816 17:49:41.272021  285045 cli_runner.go:164] Run: docker network inspect addons-035693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 17:49:41.289933  285045 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0816 17:49:41.293620  285045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
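The command above is minikube's idempotent `/etc/hosts` update for `host.minikube.internal`: filter out any old entry, append the current one, and swap the whole file in via a temp copy. A sudo-free sketch of the same pattern (`HOSTS` stands in for `/etc/hosts`; the IP and hostname are from the log):

```shell
# Scratch hosts file that already contains the entry, to show idempotency.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n192.168.49.1\thost.minikube.internal\n' > "$HOSTS"

TAB=$(printf '\t')
TMP=$(mktemp)
# Drop any existing tab-separated entry, append the fresh one, replace the file.
{ grep -v "${TAB}host.minikube.internal\$" "$HOSTS"; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$TMP"
cp "$TMP" "$HOSTS"

grep -c 'host.minikube.internal' "$HOSTS"
```

Because removal always precedes the append, the entry count stays at one no matter how many times the update runs, and writing to a temp file first avoids truncating the live hosts file mid-rewrite.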
	I0816 17:49:41.304816  285045 kubeadm.go:883] updating cluster {Name:addons-035693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-035693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 17:49:41.304943  285045 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 17:49:41.305024  285045 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 17:49:41.383622  285045 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 17:49:41.383646  285045 crio.go:433] Images already preloaded, skipping extraction
	I0816 17:49:41.383700  285045 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 17:49:41.424870  285045 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 17:49:41.424931  285045 cache_images.go:84] Images are preloaded, skipping loading
	I0816 17:49:41.424941  285045 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0816 17:49:41.425043  285045 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-035693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-035693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 17:49:41.425123  285045 ssh_runner.go:195] Run: crio config
	I0816 17:49:41.477162  285045 cni.go:84] Creating CNI manager for ""
	I0816 17:49:41.477188  285045 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0816 17:49:41.477205  285045 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 17:49:41.477229  285045 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-035693 NodeName:addons-035693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 17:49:41.477382  285045 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-035693"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 17:49:41.477468  285045 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 17:49:41.486473  285045 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 17:49:41.486546  285045 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 17:49:41.495546  285045 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0816 17:49:41.513750  285045 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 17:49:41.531574  285045 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0816 17:49:41.550204  285045 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0816 17:49:41.553797  285045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 17:49:41.564907  285045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:49:41.657386  285045 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 17:49:41.671102  285045 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693 for IP: 192.168.49.2
	I0816 17:49:41.671134  285045 certs.go:194] generating shared ca certs ...
	I0816 17:49:41.671152  285045 certs.go:226] acquiring lock for ca certs: {Name:mk5387cb6cbb5a544c3c082f10b573950a035d73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:49:41.671320  285045 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-278896/.minikube/ca.key
	I0816 17:49:41.980959  285045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-278896/.minikube/ca.crt ...
	I0816 17:49:41.980993  285045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-278896/.minikube/ca.crt: {Name:mk5214fbcb931ec9c573571ab7e2e949722d8301 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:49:41.981618  285045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-278896/.minikube/ca.key ...
	I0816 17:49:41.981634  285045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-278896/.minikube/ca.key: {Name:mk4aa1aedafc855a8e7dc18ed3510793dd3a613d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:49:41.981726  285045 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-278896/.minikube/proxy-client-ca.key
	I0816 17:49:42.323782  285045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-278896/.minikube/proxy-client-ca.crt ...
	I0816 17:49:42.323817  285045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-278896/.minikube/proxy-client-ca.crt: {Name:mk75128e47dd037679fc405e656c58508593e4dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:49:42.324657  285045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-278896/.minikube/proxy-client-ca.key ...
	I0816 17:49:42.324689  285045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-278896/.minikube/proxy-client-ca.key: {Name:mk3f6ccdc85d2dc2c1b5d768e569cd2508b4985f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:49:42.324822  285045 certs.go:256] generating profile certs ...
	I0816 17:49:42.324901  285045 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.key
	I0816 17:49:42.325000  285045 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt with IP's: []
	I0816 17:49:42.977182  285045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt ...
	I0816 17:49:42.977216  285045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: {Name:mk1bda83cfdab2272cfc81be06128d85dee1c240 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:49:42.978073  285045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.key ...
	I0816 17:49:42.978099  285045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.key: {Name:mk987eaaed9a9a3f7e4e5bdd35ab8ad4be3481b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:49:42.978551  285045 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/apiserver.key.302e7c68
	I0816 17:49:42.978575  285045 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/apiserver.crt.302e7c68 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0816 17:49:43.170392  285045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/apiserver.crt.302e7c68 ...
	I0816 17:49:43.170422  285045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/apiserver.crt.302e7c68: {Name:mk3c2e80871adf23f9ae2045573df0a3468378e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:49:43.170608  285045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/apiserver.key.302e7c68 ...
	I0816 17:49:43.170625  285045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/apiserver.key.302e7c68: {Name:mk55b7900f4990ee92b1785096a1a3a93adad724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:49:43.170717  285045 certs.go:381] copying /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/apiserver.crt.302e7c68 -> /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/apiserver.crt
	I0816 17:49:43.170796  285045 certs.go:385] copying /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/apiserver.key.302e7c68 -> /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/apiserver.key
	I0816 17:49:43.170854  285045 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/proxy-client.key
	I0816 17:49:43.170876  285045 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/proxy-client.crt with IP's: []
	I0816 17:49:43.452907  285045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/proxy-client.crt ...
	I0816 17:49:43.452943  285045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/proxy-client.crt: {Name:mkfce6ad7090274346daaec3a6dbf34f56a24604 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:49:43.453130  285045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/proxy-client.key ...
	I0816 17:49:43.453144  285045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/proxy-client.key: {Name:mka0fbb455b5925750b76784800dbe67f2c8762f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:49:43.453346  285045 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-278896/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 17:49:43.453389  285045 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-278896/.minikube/certs/ca.pem (1082 bytes)
	I0816 17:49:43.453424  285045 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-278896/.minikube/certs/cert.pem (1123 bytes)
	I0816 17:49:43.453454  285045 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-278896/.minikube/certs/key.pem (1679 bytes)
	I0816 17:49:43.454131  285045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-278896/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 17:49:43.479895  285045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-278896/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 17:49:43.503746  285045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-278896/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 17:49:43.527799  285045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-278896/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 17:49:43.551976  285045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0816 17:49:43.576536  285045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 17:49:43.601177  285045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 17:49:43.625723  285045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 17:49:43.649499  285045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-278896/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 17:49:43.674001  285045 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 17:49:43.692435  285045 ssh_runner.go:195] Run: openssl version
	I0816 17:49:43.697904  285045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 17:49:43.707617  285045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:49:43.711110  285045 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 17:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:49:43.711182  285045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:49:43.718133  285045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 17:49:43.727616  285045 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 17:49:43.731067  285045 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 17:49:43.731121  285045 kubeadm.go:392] StartCluster: {Name:addons-035693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-035693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:49:43.731202  285045 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 17:49:43.731266  285045 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 17:49:43.770083  285045 cri.go:89] found id: ""
	I0816 17:49:43.770178  285045 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 17:49:43.779109  285045 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 17:49:43.788134  285045 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0816 17:49:43.788223  285045 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 17:49:43.797002  285045 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 17:49:43.797022  285045 kubeadm.go:157] found existing configuration files:
	
	I0816 17:49:43.797076  285045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 17:49:43.805922  285045 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 17:49:43.805991  285045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 17:49:43.814545  285045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 17:49:43.823138  285045 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 17:49:43.823203  285045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 17:49:43.831805  285045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 17:49:43.840896  285045 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 17:49:43.840990  285045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 17:49:43.849586  285045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 17:49:43.858947  285045 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 17:49:43.859043  285045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 17:49:43.867419  285045 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0816 17:49:43.916259  285045 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 17:49:43.916494  285045 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 17:49:43.961968  285045 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0816 17:49:43.962047  285045 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-aws
	I0816 17:49:43.962099  285045 kubeadm.go:310] OS: Linux
	I0816 17:49:43.962149  285045 kubeadm.go:310] CGROUPS_CPU: enabled
	I0816 17:49:43.962203  285045 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0816 17:49:43.962255  285045 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0816 17:49:43.962322  285045 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0816 17:49:43.962373  285045 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0816 17:49:43.962423  285045 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0816 17:49:43.962470  285045 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0816 17:49:43.962518  285045 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0816 17:49:43.962567  285045 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0816 17:49:44.031861  285045 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 17:49:44.031985  285045 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 17:49:44.032111  285045 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 17:49:44.039195  285045 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 17:49:44.043016  285045 out.go:235]   - Generating certificates and keys ...
	I0816 17:49:44.043239  285045 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 17:49:44.043346  285045 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 17:49:44.286736  285045 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 17:49:44.511056  285045 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 17:49:44.772081  285045 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 17:49:46.085635  285045 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 17:49:46.523458  285045 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 17:49:46.523713  285045 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-035693 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0816 17:49:47.056190  285045 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 17:49:47.056475  285045 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-035693 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0816 17:49:47.590822  285045 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 17:49:47.752715  285045 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 17:49:48.014187  285045 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 17:49:48.014264  285045 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 17:49:48.437238  285045 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 17:49:48.779088  285045 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 17:49:49.212979  285045 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 17:49:49.391114  285045 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 17:49:50.256074  285045 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 17:49:50.257834  285045 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 17:49:50.259990  285045 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 17:49:50.262242  285045 out.go:235]   - Booting up control plane ...
	I0816 17:49:50.262348  285045 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 17:49:50.262426  285045 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 17:49:50.263303  285045 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 17:49:50.273483  285045 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 17:49:50.279178  285045 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 17:49:50.279510  285045 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 17:49:50.372209  285045 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 17:49:50.372332  285045 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 17:49:51.373800  285045 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001684122s
	I0816 17:49:51.373898  285045 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 17:49:57.877257  285045 kubeadm.go:310] [api-check] The API server is healthy after 6.501311028s
	I0816 17:49:57.895654  285045 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 17:49:57.912930  285045 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 17:49:57.938925  285045 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 17:49:57.939117  285045 kubeadm.go:310] [mark-control-plane] Marking the node addons-035693 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 17:49:57.950829  285045 kubeadm.go:310] [bootstrap-token] Using token: dfzgpf.u099ubqf8oq9r2ar
	I0816 17:49:57.952770  285045 out.go:235]   - Configuring RBAC rules ...
	I0816 17:49:57.952894  285045 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 17:49:57.958001  285045 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 17:49:57.966571  285045 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 17:49:57.971924  285045 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 17:49:57.975967  285045 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 17:49:57.979880  285045 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 17:49:58.281497  285045 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 17:49:58.720885  285045 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 17:49:59.281869  285045 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 17:49:59.282985  285045 kubeadm.go:310] 
	I0816 17:49:59.283070  285045 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 17:49:59.283083  285045 kubeadm.go:310] 
	I0816 17:49:59.283159  285045 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 17:49:59.283168  285045 kubeadm.go:310] 
	I0816 17:49:59.283193  285045 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 17:49:59.283253  285045 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 17:49:59.283305  285045 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 17:49:59.283314  285045 kubeadm.go:310] 
	I0816 17:49:59.283366  285045 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 17:49:59.283374  285045 kubeadm.go:310] 
	I0816 17:49:59.283420  285045 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 17:49:59.283429  285045 kubeadm.go:310] 
	I0816 17:49:59.283480  285045 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 17:49:59.283555  285045 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 17:49:59.283626  285045 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 17:49:59.283634  285045 kubeadm.go:310] 
	I0816 17:49:59.283716  285045 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 17:49:59.283800  285045 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 17:49:59.283809  285045 kubeadm.go:310] 
	I0816 17:49:59.283890  285045 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dfzgpf.u099ubqf8oq9r2ar \
	I0816 17:49:59.283992  285045 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:522d2e9084bbcb6112ba1fb935ecdfcda75cfb6d9f17126bcf73feb6609fe7d4 \
	I0816 17:49:59.284016  285045 kubeadm.go:310] 	--control-plane 
	I0816 17:49:59.284023  285045 kubeadm.go:310] 
	I0816 17:49:59.284105  285045 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 17:49:59.284113  285045 kubeadm.go:310] 
	I0816 17:49:59.284192  285045 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dfzgpf.u099ubqf8oq9r2ar \
	I0816 17:49:59.284294  285045 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:522d2e9084bbcb6112ba1fb935ecdfcda75cfb6d9f17126bcf73feb6609fe7d4 
	I0816 17:49:59.288651  285045 kubeadm.go:310] W0816 17:49:43.912416    1174 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 17:49:59.288952  285045 kubeadm.go:310] W0816 17:49:43.913746    1174 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 17:49:59.289162  285045 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-aws\n", err: exit status 1
	I0816 17:49:59.289269  285045 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 17:49:59.289291  285045 cni.go:84] Creating CNI manager for ""
	I0816 17:49:59.289300  285045 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0816 17:49:59.291494  285045 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 17:49:59.293474  285045 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0816 17:49:59.297743  285045 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0816 17:49:59.297766  285045 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0816 17:49:59.316991  285045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 17:49:59.621246  285045 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 17:49:59.621413  285045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:49:59.621419  285045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-035693 minikube.k8s.io/updated_at=2024_08_16T17_49_59_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=addons-035693 minikube.k8s.io/primary=true
	I0816 17:49:59.799087  285045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:49:59.799135  285045 ops.go:34] apiserver oom_adj: -16
	I0816 17:50:00.326410  285045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:50:00.800126  285045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:50:01.299698  285045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:50:01.799218  285045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:50:02.299219  285045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:50:02.799694  285045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:50:03.299981  285045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:50:03.799549  285045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:50:03.885997  285045 kubeadm.go:1113] duration metric: took 4.264647393s to wait for elevateKubeSystemPrivileges
	I0816 17:50:03.886042  285045 kubeadm.go:394] duration metric: took 20.154925999s to StartCluster
	I0816 17:50:03.886061  285045 settings.go:142] acquiring lock: {Name:mk45720424438a5d93f082d2cc69f502b3ed6f5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:50:03.886975  285045 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-278896/kubeconfig
	I0816 17:50:03.887455  285045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-278896/kubeconfig: {Name:mk0b74dabbab2b27fb455b2cd76965b27d9abfcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:50:03.888054  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 17:50:03.888094  285045 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 17:50:03.888478  285045 config.go:182] Loaded profile config "addons-035693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:50:03.888500  285045 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0816 17:50:03.888625  285045 addons.go:69] Setting ingress-dns=true in profile "addons-035693"
	I0816 17:50:03.888624  285045 addons.go:69] Setting yakd=true in profile "addons-035693"
	I0816 17:50:03.888673  285045 addons.go:234] Setting addon yakd=true in "addons-035693"
	I0816 17:50:03.888703  285045 addons.go:234] Setting addon ingress-dns=true in "addons-035693"
	I0816 17:50:03.888720  285045 addons.go:69] Setting metrics-server=true in profile "addons-035693"
	I0816 17:50:03.888738  285045 addons.go:234] Setting addon metrics-server=true in "addons-035693"
	I0816 17:50:03.888755  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:03.888807  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:03.889277  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.889286  285045 addons.go:69] Setting cloud-spanner=true in profile "addons-035693"
	I0816 17:50:03.889350  285045 addons.go:234] Setting addon cloud-spanner=true in "addons-035693"
	I0816 17:50:03.889399  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:03.889790  285045 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-035693"
	I0816 17:50:03.889820  285045 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-035693"
	I0816 17:50:03.889853  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.889870  285045 addons.go:69] Setting storage-provisioner=true in profile "addons-035693"
	I0816 17:50:03.889909  285045 addons.go:234] Setting addon storage-provisioner=true in "addons-035693"
	I0816 17:50:03.889929  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:03.890355  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.892642  285045 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-035693"
	I0816 17:50:03.892693  285045 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-035693"
	I0816 17:50:03.893020  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.893198  285045 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-035693"
	I0816 17:50:03.893280  285045 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-035693"
	I0816 17:50:03.893330  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:03.893733  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.889277  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.889854  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:03.909292  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.893846  285045 addons.go:69] Setting default-storageclass=true in profile "addons-035693"
	I0816 17:50:03.888712  285045 addons.go:69] Setting inspektor-gadget=true in profile "addons-035693"
	I0816 17:50:03.920735  285045 addons.go:234] Setting addon inspektor-gadget=true in "addons-035693"
	I0816 17:50:03.888705  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:03.889862  285045 addons.go:69] Setting registry=true in profile "addons-035693"
	I0816 17:50:03.920876  285045 addons.go:234] Setting addon registry=true in "addons-035693"
	I0816 17:50:03.893858  285045 addons.go:69] Setting gcp-auth=true in profile "addons-035693"
	I0816 17:50:03.920961  285045 mustload.go:65] Loading cluster: addons-035693
	I0816 17:50:03.893870  285045 addons.go:69] Setting ingress=true in profile "addons-035693"
	I0816 17:50:03.921029  285045 addons.go:234] Setting addon ingress=true in "addons-035693"
	I0816 17:50:03.921064  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:03.941079  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.893899  285045 out.go:177] * Verifying Kubernetes components...
	I0816 17:50:03.894736  285045 addons.go:69] Setting volcano=true in profile "addons-035693"
	I0816 17:50:03.894748  285045 addons.go:69] Setting volumesnapshots=true in profile "addons-035693"
	I0816 17:50:03.931122  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:03.933300  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.933323  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:03.933470  285045 config.go:182] Loaded profile config "addons-035693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:50:03.931064  285045 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-035693"
	I0816 17:50:03.954555  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.968625  285045 addons.go:234] Setting addon volcano=true in "addons-035693"
	I0816 17:50:03.968707  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:03.969273  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.972353  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.976497  285045 addons.go:234] Setting addon volumesnapshots=true in "addons-035693"
	I0816 17:50:03.976643  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:03.977195  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:03.981479  285045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:50:03.999113  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:04.047576  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:04.049898  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 17:50:04.052854  285045 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 17:50:04.058698  285045 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0816 17:50:04.060912  285045 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0816 17:50:04.063690  285045 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-035693"
	I0816 17:50:04.063780  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:04.064289  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:04.064838  285045 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0816 17:50:04.065251  285045 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 17:50:04.065273  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 17:50:04.065331  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:04.076322  285045 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0816 17:50:04.076343  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0816 17:50:04.076414  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:04.077478  285045 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0816 17:50:04.084755  285045 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0816 17:50:04.088433  285045 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 17:50:04.088462  285045 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 17:50:04.088550  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:04.126975  285045 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0816 17:50:04.127001  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0816 17:50:04.127070  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	W0816 17:50:04.136823  285045 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0816 17:50:04.149070  285045 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0816 17:50:04.152467  285045 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0816 17:50:04.154738  285045 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0816 17:50:04.157272  285045 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0816 17:50:04.161505  285045 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0816 17:50:04.163393  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:04.167335  285045 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0816 17:50:04.169134  285045 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0816 17:50:04.171255  285045 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0816 17:50:04.171297  285045 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0816 17:50:04.171460  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:04.171732  285045 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0816 17:50:04.172505  285045 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0816 17:50:04.172521  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0816 17:50:04.172741  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:04.200214  285045 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0816 17:50:04.202545  285045 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0816 17:50:04.204810  285045 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0816 17:50:04.204880  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0816 17:50:04.204983  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:04.209403  285045 addons.go:234] Setting addon default-storageclass=true in "addons-035693"
	I0816 17:50:04.209495  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:04.210006  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:04.237112  285045 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0816 17:50:04.238975  285045 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0816 17:50:04.239002  285045 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0816 17:50:04.239080  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:04.280222  285045 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0816 17:50:04.287414  285045 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0816 17:50:04.296460  285045 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0816 17:50:04.296558  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:04.332716  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.333441  285045 out.go:177]   - Using image docker.io/busybox:stable
	I0816 17:50:04.339078  285045 out.go:177]   - Using image docker.io/registry:2.8.3
	I0816 17:50:04.344994  285045 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0816 17:50:04.345033  285045 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0816 17:50:04.345015  285045 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0816 17:50:04.351493  285045 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0816 17:50:04.351516  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0816 17:50:04.351582  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:04.352751  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.353468  285045 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0816 17:50:04.353484  285045 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0816 17:50:04.353544  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:04.353955  285045 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 17:50:04.354243  285045 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0816 17:50:04.354256  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0816 17:50:04.354307  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:04.391898  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.397502  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.426126  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.441946  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.447101  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.467243  285045 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 17:50:04.467266  285045 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 17:50:04.467328  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:04.490239  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.491055  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.491762  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.519442  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.525684  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.530453  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:04.753878  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 17:50:04.759952  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0816 17:50:04.795375  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0816 17:50:04.840109  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0816 17:50:04.893011  285045 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0816 17:50:04.893075  285045 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0816 17:50:04.899706  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0816 17:50:04.944279  285045 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0816 17:50:04.944357  285045 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0816 17:50:04.948291  285045 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0816 17:50:04.948314  285045 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0816 17:50:04.959233  285045 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0816 17:50:04.959258  285045 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0816 17:50:04.962270  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 17:50:04.969528  285045 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 17:50:04.969550  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0816 17:50:05.013737  285045 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0816 17:50:05.013825  285045 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0816 17:50:05.076188  285045 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0816 17:50:05.076262  285045 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0816 17:50:05.106866  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0816 17:50:05.167922  285045 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0816 17:50:05.167997  285045 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0816 17:50:05.178505  285045 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0816 17:50:05.178586  285045 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0816 17:50:05.193959  285045 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0816 17:50:05.194024  285045 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0816 17:50:05.196107  285045 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 17:50:05.196175  285045 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 17:50:05.200533  285045 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0816 17:50:05.200659  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0816 17:50:05.291385  285045 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0816 17:50:05.291425  285045 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0816 17:50:05.315226  285045 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0816 17:50:05.315267  285045 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0816 17:50:05.381627  285045 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0816 17:50:05.381655  285045 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0816 17:50:05.384905  285045 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0816 17:50:05.384931  285045 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0816 17:50:05.420850  285045 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0816 17:50:05.420878  285045 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0816 17:50:05.434289  285045 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 17:50:05.434315  285045 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 17:50:05.435280  285045 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0816 17:50:05.435302  285045 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0816 17:50:05.448975  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0816 17:50:05.554915  285045 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0816 17:50:05.554955  285045 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0816 17:50:05.564768  285045 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0816 17:50:05.564793  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0816 17:50:05.577640  285045 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0816 17:50:05.577666  285045 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0816 17:50:05.590036  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 17:50:05.595957  285045 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.241972389s)
	I0816 17:50:05.596001  285045 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.546078882s)
	I0816 17:50:05.596015  285045 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0816 17:50:05.597777  285045 node_ready.go:35] waiting up to 6m0s for node "addons-035693" to be "Ready" ...
	I0816 17:50:05.600396  285045 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0816 17:50:05.600420  285045 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0816 17:50:05.663110  285045 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0816 17:50:05.663137  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0816 17:50:05.666380  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0816 17:50:05.679642  285045 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0816 17:50:05.679670  285045 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0816 17:50:05.724020  285045 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 17:50:05.724046  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0816 17:50:05.769778  285045 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0816 17:50:05.769810  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0816 17:50:05.784980  285045 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0816 17:50:05.785008  285045 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0816 17:50:05.868389  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 17:50:05.911331  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0816 17:50:05.924895  285045 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0816 17:50:05.924923  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0816 17:50:05.948237  285045 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0816 17:50:05.948276  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0816 17:50:05.984352  285045 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0816 17:50:05.984378  285045 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0816 17:50:06.130782  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0816 17:50:07.153208  285045 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-035693" context rescaled to 1 replicas
	I0816 17:50:07.992134  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:09.749034  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.995070193s)
	I0816 17:50:09.749149  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.989111146s)
	I0816 17:50:09.749210  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.953754182s)
	I0816 17:50:10.175788  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:10.853353  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.953570704s)
	I0816 17:50:10.853401  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.891110182s)
	I0816 17:50:10.853440  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.746506365s)
	I0816 17:50:10.853487  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.404482969s)
	I0816 17:50:10.853949  285045 addons.go:475] Verifying addon registry=true in "addons-035693"
	I0816 17:50:10.853537  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.263470737s)
	I0816 17:50:10.854131  285045 addons.go:475] Verifying addon metrics-server=true in "addons-035693"
	I0816 17:50:10.853571  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.187164322s)
	I0816 17:50:10.854512  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.014327792s)
	I0816 17:50:10.854539  285045 addons.go:475] Verifying addon ingress=true in "addons-035693"
	I0816 17:50:10.856194  285045 out.go:177] * Verifying ingress addon...
	I0816 17:50:10.856292  285045 out.go:177] * Verifying registry addon...
	I0816 17:50:10.856339  285045 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-035693 service yakd-dashboard -n yakd-dashboard
	
	I0816 17:50:10.859728  285045 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0816 17:50:10.860653  285045 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0816 17:50:10.867769  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.999339134s)
	W0816 17:50:10.867803  285045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0816 17:50:10.867834  285045 retry.go:31] will retry after 188.586243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0816 17:50:10.867903  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.956538166s)
	W0816 17:50:10.884371  285045 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0816 17:50:10.887939  285045 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0816 17:50:10.888017  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:10.888654  285045 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0816 17:50:10.888707  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:11.056668  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 17:50:11.342682  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.211847634s)
	I0816 17:50:11.342768  285045 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-035693"
	I0816 17:50:11.345550  285045 out.go:177] * Verifying csi-hostpath-driver addon...
	I0816 17:50:11.348584  285045 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0816 17:50:11.435847  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:11.445367  285045 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0816 17:50:11.445444  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:11.448505  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:11.870817  285045 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0816 17:50:11.870894  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:11.885931  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:11.888013  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:11.917120  285045 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0816 17:50:11.917268  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:11.941936  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:12.131665  285045 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0816 17:50:12.191567  285045 addons.go:234] Setting addon gcp-auth=true in "addons-035693"
	I0816 17:50:12.191667  285045 host.go:66] Checking if "addons-035693" exists ...
	I0816 17:50:12.192217  285045 cli_runner.go:164] Run: docker container inspect addons-035693 --format={{.State.Status}}
	I0816 17:50:12.218677  285045 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0816 17:50:12.218728  285045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-035693
	I0816 17:50:12.242594  285045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/addons-035693/id_rsa Username:docker}
	I0816 17:50:12.352554  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:12.367301  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:12.368542  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:12.601639  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:12.852752  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:12.864026  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:12.864895  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:13.352495  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:13.366464  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:13.366758  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:13.852663  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:13.863624  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:13.865233  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:14.346922  285045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.290161191s)
	I0816 17:50:14.346994  285045 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.128298929s)
	I0816 17:50:14.349289  285045 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0816 17:50:14.350918  285045 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0816 17:50:14.352910  285045 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0816 17:50:14.352933  285045 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0816 17:50:14.356273  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:14.366861  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:14.368130  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:14.387707  285045 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0816 17:50:14.387799  285045 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0816 17:50:14.413340  285045 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0816 17:50:14.413414  285045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0816 17:50:14.435641  285045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0816 17:50:14.601717  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:14.853376  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:14.865634  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:14.869932  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:15.135140  285045 addons.go:475] Verifying addon gcp-auth=true in "addons-035693"
	I0816 17:50:15.139722  285045 out.go:177] * Verifying gcp-auth addon...
	I0816 17:50:15.143101  285045 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0816 17:50:15.148470  285045 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0816 17:50:15.148550  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:15.353023  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:15.366956  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:15.368104  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:15.647486  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:15.852688  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:15.864466  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:15.865718  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:16.146952  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:16.353352  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:16.365845  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:16.368164  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:16.602232  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:16.652315  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:16.852614  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:16.864555  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:16.865004  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:17.147584  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:17.353123  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:17.363878  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:17.365115  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:17.647102  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:17.852839  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:17.864116  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:17.864713  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:18.147445  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:18.353962  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:18.363939  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:18.364978  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:18.646831  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:18.852957  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:18.863965  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:18.864711  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:19.101010  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:19.148394  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:19.353427  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:19.364027  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:19.365099  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:19.646563  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:19.852220  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:19.868864  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:19.869904  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:20.147223  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:20.352384  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:20.364019  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:20.364437  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:20.647484  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:20.853178  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:20.863653  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:20.864296  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:21.147569  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:21.352834  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:21.363676  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:21.364186  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:21.600840  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:21.646510  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:21.852998  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:21.864320  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:21.865125  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:22.146699  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:22.353127  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:22.364041  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:22.365241  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:22.647476  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:22.852046  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:22.863788  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:22.864548  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:23.147123  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:23.352377  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:23.363918  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:23.364708  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:23.601198  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:23.646834  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:23.852805  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:23.863733  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:23.865151  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:24.147437  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:24.352665  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:24.364207  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:24.364834  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:24.646770  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:24.853151  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:24.864433  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:24.865195  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:25.146628  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:25.352921  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:25.364651  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:25.365065  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:25.646759  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:25.852786  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:25.863581  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:25.864661  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:26.100861  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:26.147114  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:26.352307  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:26.364602  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:26.364773  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:26.647486  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:26.852776  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:26.863873  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:26.864511  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:27.146593  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:27.353102  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:27.363761  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:27.365811  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:27.647544  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:27.852874  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:27.863545  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:27.864550  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:28.146707  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:28.351991  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:28.363719  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:28.364922  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:28.602283  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:28.646913  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:28.851892  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:28.865076  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:28.865328  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:29.148348  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:29.354066  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:29.363639  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:29.365312  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:29.647006  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:29.852538  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:29.864500  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:29.865282  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:30.149467  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:30.352761  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:30.363549  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:30.364299  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:30.646926  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:30.852150  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:30.864539  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:30.865073  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:31.101223  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:31.147117  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:31.352100  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:31.365309  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:31.365422  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:31.646607  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:31.852824  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:31.864486  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:31.865729  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:32.147178  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:32.352426  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:32.363853  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:32.364960  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:32.646513  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:32.852955  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:32.864768  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:32.865011  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:33.102251  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:33.147357  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:33.352997  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:33.369889  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:33.370322  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:33.646539  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:33.853119  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:33.870295  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:33.872076  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:34.148367  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:34.353911  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:34.364365  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:34.365682  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:34.646537  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:34.851930  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:34.863624  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:34.866458  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:35.102946  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:35.146999  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:35.352643  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:35.363233  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:35.364322  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:35.646410  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:35.852866  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:35.863946  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:35.864384  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:36.147540  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:36.352333  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:36.363496  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:36.364488  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:36.646742  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:36.852901  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:36.863597  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:36.864730  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:37.147404  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:37.353168  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:37.364472  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:37.365251  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:37.601774  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:37.647337  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:37.852377  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:37.863938  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:37.864704  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:38.146637  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:38.352283  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:38.363994  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:38.365128  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:38.647488  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:38.852647  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:38.862946  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:38.864241  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:39.147152  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:39.352126  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:39.363672  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:39.365533  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:39.647237  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:39.853085  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:39.863471  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:39.865361  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:40.101658  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:40.147453  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:40.352761  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:40.363341  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:40.364586  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:40.646829  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:40.852662  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:40.864804  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:40.865201  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:41.146635  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:41.352076  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:41.364230  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:41.364683  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:41.646284  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:41.852922  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:41.863914  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:41.865173  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:42.103239  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:42.147593  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:42.352830  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:42.364947  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:42.365877  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:42.646574  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:42.852097  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:42.863627  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:42.865514  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:43.146576  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:43.352484  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:43.363889  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:43.365628  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:43.646057  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:43.852542  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:43.864649  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:43.865488  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:44.146125  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:44.352490  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:44.363840  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:44.364917  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:44.601456  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:44.646263  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:44.852818  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:44.864056  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:44.866039  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:45.148168  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:45.353692  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:45.366681  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:45.367604  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:45.646479  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:45.852667  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:45.863580  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:45.865579  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:46.146568  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:46.352057  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:46.364347  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:46.365419  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:46.647100  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:46.853071  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:46.863652  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:46.864363  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:47.101731  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:47.146371  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:47.353031  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:47.363677  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:47.364756  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:47.648498  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:47.854674  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:47.864924  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:47.865184  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:48.147142  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:48.352943  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:48.364655  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:48.366594  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:48.647160  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:48.852711  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:48.863396  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:48.864054  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:49.146931  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:49.352015  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:49.364861  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:49.365257  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:49.601265  285045 node_ready.go:53] node "addons-035693" has status "Ready":"False"
	I0816 17:50:49.646901  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:49.852171  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:49.864412  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:49.866114  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:50.147500  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:50.361770  285045 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0816 17:50:50.361848  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:50.381060  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:50.381329  285045 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0816 17:50:50.381371  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:50.654928  285045 node_ready.go:49] node "addons-035693" has status "Ready":"True"
	I0816 17:50:50.655001  285045 node_ready.go:38] duration metric: took 45.057196224s for node "addons-035693" to be "Ready" ...
	I0816 17:50:50.655042  285045 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 17:50:50.668394  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:50.685870  285045 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rbz4z" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:50.854186  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:50.865103  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:50.865403  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:51.162401  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:51.360715  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:51.367185  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:51.367618  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:51.646949  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:51.855100  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:51.863976  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:51.864665  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:52.147813  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:52.398801  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:52.399871  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:52.411686  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:52.647636  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:52.694885  285045 pod_ready.go:103] pod "coredns-6f6b679f8f-rbz4z" in "kube-system" namespace has status "Ready":"False"
	I0816 17:50:52.874250  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:52.880709  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:52.882252  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:53.147593  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:53.381303  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:53.466159  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:53.468134  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:53.646648  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:53.696610  285045 pod_ready.go:93] pod "coredns-6f6b679f8f-rbz4z" in "kube-system" namespace has status "Ready":"True"
	I0816 17:50:53.696685  285045 pod_ready.go:82] duration metric: took 3.010740776s for pod "coredns-6f6b679f8f-rbz4z" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:53.696722  285045 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-035693" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:53.703115  285045 pod_ready.go:93] pod "etcd-addons-035693" in "kube-system" namespace has status "Ready":"True"
	I0816 17:50:53.703181  285045 pod_ready.go:82] duration metric: took 6.428986ms for pod "etcd-addons-035693" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:53.703210  285045 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-035693" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:53.717140  285045 pod_ready.go:93] pod "kube-apiserver-addons-035693" in "kube-system" namespace has status "Ready":"True"
	I0816 17:50:53.717215  285045 pod_ready.go:82] duration metric: took 13.984627ms for pod "kube-apiserver-addons-035693" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:53.717241  285045 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-035693" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:53.729656  285045 pod_ready.go:93] pod "kube-controller-manager-addons-035693" in "kube-system" namespace has status "Ready":"True"
	I0816 17:50:53.729723  285045 pod_ready.go:82] duration metric: took 12.461066ms for pod "kube-controller-manager-addons-035693" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:53.729751  285045 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gk9xc" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:53.740221  285045 pod_ready.go:93] pod "kube-proxy-gk9xc" in "kube-system" namespace has status "Ready":"True"
	I0816 17:50:53.740295  285045 pod_ready.go:82] duration metric: took 10.524667ms for pod "kube-proxy-gk9xc" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:53.740321  285045 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-035693" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:53.854423  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:53.867652  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:53.869247  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:54.092453  285045 pod_ready.go:93] pod "kube-scheduler-addons-035693" in "kube-system" namespace has status "Ready":"True"
	I0816 17:50:54.092534  285045 pod_ready.go:82] duration metric: took 352.19134ms for pod "kube-scheduler-addons-035693" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:54.092584  285045 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace to be "Ready" ...
	I0816 17:50:54.152350  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:54.354432  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:54.367461  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:54.373575  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:54.647430  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:54.855566  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:54.867874  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:54.869224  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:55.148416  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:55.354014  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:55.376324  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:55.377729  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:55.647150  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:55.864347  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:55.869854  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:55.872677  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:56.100007  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:50:56.148850  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:56.354281  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:56.364669  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:56.366005  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:56.647436  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:56.853437  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:56.864132  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:56.865807  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:57.147105  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:57.353300  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:57.371437  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:57.372589  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:57.647311  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:57.854955  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:57.869292  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:57.871730  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:58.116422  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:50:58.147738  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:58.353610  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:58.368324  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:58.371470  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:58.647816  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:58.854110  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:58.865568  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:58.867414  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:59.147748  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:59.355375  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:59.365193  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:50:59.371358  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:59.648182  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:50:59.879769  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:50:59.890345  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:50:59.891917  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:00.121083  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:00.151927  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:00.359406  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:00.367672  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:51:00.371519  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:00.647682  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:00.856113  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:00.870764  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:51:00.872065  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:01.148808  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:01.361976  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:01.381777  285045 kapi.go:107] duration metric: took 50.52111974s to wait for kubernetes.io/minikube-addons=registry ...
	I0816 17:51:01.382947  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:01.656093  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:01.853183  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:01.863809  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:02.147012  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:02.354520  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:02.365529  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:02.602207  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:02.647477  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:02.862035  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:02.880385  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:03.147589  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:03.354102  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:03.364177  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:03.647858  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:03.862837  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:03.867631  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:04.147267  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:04.362564  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:04.365068  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:04.647733  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:04.853960  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:04.864844  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:05.100269  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:05.147109  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:05.357997  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:05.364319  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:05.647443  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:05.853891  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:05.864263  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:06.147695  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:06.360106  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:06.366377  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:06.647089  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:06.854089  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:06.864280  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:07.148091  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:07.362526  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:07.383316  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:07.599540  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:07.648080  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:07.854168  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:07.866208  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:08.147834  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:08.354472  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:08.364431  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:08.674021  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:08.855090  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:08.864753  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:09.146373  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:09.358309  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:09.366376  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:09.647286  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:09.854689  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:09.863793  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:10.100505  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:10.147191  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:10.354491  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:10.371873  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:10.647548  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:10.853676  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:10.864329  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:11.148143  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:11.355051  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:11.383508  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:11.647123  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:11.855727  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:11.864799  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:12.101201  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:12.147483  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:12.354516  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:12.364256  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:12.647634  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:12.853866  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:12.865522  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:13.147085  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:13.357241  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:13.370668  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:13.648501  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:13.853972  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:13.865543  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:14.105406  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:14.149218  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:14.354265  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:14.363944  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:14.647914  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:14.853703  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:14.864264  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:15.147559  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:15.355786  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:15.364385  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:15.648220  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:15.854651  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:15.865556  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:16.147565  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:16.354505  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:16.365487  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:16.600425  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:16.647157  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:16.855260  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:16.866078  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:17.147976  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:17.354504  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:17.365453  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:17.649461  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:17.860811  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:17.867091  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:18.147113  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:18.353683  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:18.364744  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:18.647586  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:18.854338  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:18.864978  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:19.100434  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:19.147876  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:19.353834  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:19.364548  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:19.647377  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:19.854611  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:19.865182  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:20.147838  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:20.354058  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:20.381672  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:20.647689  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:20.854266  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:20.866839  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:21.105357  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:21.147832  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:21.354478  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:21.372022  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:21.647282  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:21.853838  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:21.869542  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:22.147038  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:22.353799  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:22.364016  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:22.646531  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:22.855142  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:22.865256  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:23.147244  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:23.354654  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:23.366452  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:23.601852  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:23.648171  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:23.855905  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:23.954707  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:24.147135  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:24.353204  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:24.364951  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:24.647645  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:24.853845  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:24.864558  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:25.148205  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:25.353933  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:25.364146  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:25.646839  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:25.854192  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:25.864475  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:26.100324  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:26.146914  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:26.357422  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:26.364738  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:26.647825  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:26.856113  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:26.864979  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:27.148216  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:27.353495  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:27.364675  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:27.650669  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:27.854872  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:27.864273  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:28.146994  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:28.354251  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:28.364897  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:28.601779  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:28.648191  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:28.855330  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:28.864816  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:29.154052  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:29.356117  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:29.364834  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:29.647841  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:29.854699  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:29.864729  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:30.147558  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:30.353855  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:30.366512  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:30.653687  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:30.855447  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:30.865076  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:31.100424  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:31.147468  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:31.355209  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:31.364250  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:31.649077  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:31.853515  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:31.864751  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:32.147370  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:32.354256  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:32.364185  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:32.648917  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:32.854193  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:32.864502  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:33.149376  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:33.359353  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:33.365975  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:33.600215  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:33.646999  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:33.854939  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:33.864887  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:34.148664  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:34.355152  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:34.363993  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:34.646907  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:34.854193  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:34.864635  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:35.147232  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:35.353890  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:35.364395  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:35.647363  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:35.855025  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:35.864812  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:36.101431  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:36.148967  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:36.354614  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:36.366744  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:36.647758  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:36.855666  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:36.865514  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:37.147555  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:37.355058  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:37.364605  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:37.651097  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:37.854423  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:37.864154  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:38.147225  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:38.353461  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:38.365764  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:38.598836  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:38.646863  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:38.853856  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:38.864046  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:39.150957  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:39.354006  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:39.364804  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:39.647305  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:39.870022  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:39.880372  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:40.146923  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:40.354690  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:40.364302  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:40.602195  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:40.647883  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:40.860433  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:40.886975  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:41.174060  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:41.356139  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:41.365699  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:41.661435  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:41.862306  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:41.865526  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:42.160109  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:42.357369  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:42.365075  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:42.646561  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:42.853626  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:42.864032  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:43.099898  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:43.147916  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:43.353997  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:43.364431  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:43.647808  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:43.854796  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:43.865323  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:44.146837  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:44.354643  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:44.364493  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:44.649019  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:44.853742  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:44.864619  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:45.123716  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:45.166897  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:45.410276  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:45.412011  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:45.652252  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:45.855105  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:45.864072  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:46.147818  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:46.357936  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:46.367566  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:46.646395  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:46.854799  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:46.866170  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:47.148368  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:47.354900  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:47.374701  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:47.600502  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:47.652277  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:47.861296  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:47.870863  285045 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:51:48.158936  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:48.356329  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:48.367114  285045 kapi.go:107] duration metric: took 1m37.50738507s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0816 17:51:48.647142  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:48.854979  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:49.146753  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:49.354039  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:49.601386  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:49.693167  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:49.854050  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:50.148315  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:50.353268  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:50.646860  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:50.854277  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:51.149149  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:51.363022  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:51.601696  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:51.648531  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:51.853798  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:52.148017  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:52.353357  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:52.649751  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:52.855277  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:53.148870  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:53.358278  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:53.647023  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:53.854289  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:54.100268  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:54.147149  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:54.354672  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:54.648752  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:54.853943  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:55.148502  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:55.355270  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:55.648737  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:55.861222  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:56.113451  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:56.148300  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:56.358511  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:56.649027  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:56.853754  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:57.147643  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:57.354241  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:57.646812  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:57.854290  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:58.154998  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:58.354108  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:58.600942  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:51:58.646862  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:58.854252  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:59.147596  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:51:59.354095  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:51:59.647250  285045 kapi.go:107] duration metric: took 1m44.504161402s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0816 17:51:59.649340  285045 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-035693 cluster.
	I0816 17:51:59.651040  285045 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0816 17:51:59.652803  285045 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0816 17:51:59.853009  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:52:00.357790  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:52:00.621925  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:52:00.854564  285045 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:52:01.353962  285045 kapi.go:107] duration metric: took 1m50.00539667s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0816 17:52:01.356271  285045 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, cloud-spanner, nvidia-device-plugin, metrics-server, yakd, inspektor-gadget, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0816 17:52:01.358985  285045 addons.go:510] duration metric: took 1m57.47047744s for enable addons: enabled=[storage-provisioner ingress-dns cloud-spanner nvidia-device-plugin metrics-server yakd inspektor-gadget default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0816 17:52:03.100224  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:52:05.599371  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:52:08.099221  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:52:10.100834  285045 pod_ready.go:103] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"False"
	I0816 17:52:11.099041  285045 pod_ready.go:93] pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace has status "Ready":"True"
	I0816 17:52:11.099074  285045 pod_ready.go:82] duration metric: took 1m17.006461408s for pod "metrics-server-8988944d9-ssk4x" in "kube-system" namespace to be "Ready" ...
	I0816 17:52:11.099089  285045 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-jsx2r" in "kube-system" namespace to be "Ready" ...
	I0816 17:52:11.104870  285045 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-jsx2r" in "kube-system" namespace has status "Ready":"True"
	I0816 17:52:11.104898  285045 pod_ready.go:82] duration metric: took 5.801913ms for pod "nvidia-device-plugin-daemonset-jsx2r" in "kube-system" namespace to be "Ready" ...
	I0816 17:52:11.104922  285045 pod_ready.go:39] duration metric: took 1m20.449849672s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 17:52:11.104938  285045 api_server.go:52] waiting for apiserver process to appear ...
	I0816 17:52:11.104974  285045 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 17:52:11.105042  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 17:52:11.163469  285045 cri.go:89] found id: "16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2"
	I0816 17:52:11.163492  285045 cri.go:89] found id: ""
	I0816 17:52:11.163502  285045 logs.go:276] 1 containers: [16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2]
	I0816 17:52:11.163596  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:11.166996  285045 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 17:52:11.167077  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 17:52:11.206891  285045 cri.go:89] found id: "561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea"
	I0816 17:52:11.206913  285045 cri.go:89] found id: ""
	I0816 17:52:11.206922  285045 logs.go:276] 1 containers: [561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea]
	I0816 17:52:11.206978  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:11.210709  285045 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 17:52:11.210781  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 17:52:11.250990  285045 cri.go:89] found id: "012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52"
	I0816 17:52:11.251064  285045 cri.go:89] found id: ""
	I0816 17:52:11.251100  285045 logs.go:276] 1 containers: [012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52]
	I0816 17:52:11.251195  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:11.256402  285045 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 17:52:11.256473  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 17:52:11.299843  285045 cri.go:89] found id: "37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db"
	I0816 17:52:11.299867  285045 cri.go:89] found id: ""
	I0816 17:52:11.299875  285045 logs.go:276] 1 containers: [37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db]
	I0816 17:52:11.299931  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:11.303593  285045 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 17:52:11.303668  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 17:52:11.347845  285045 cri.go:89] found id: "2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3"
	I0816 17:52:11.347865  285045 cri.go:89] found id: ""
	I0816 17:52:11.347873  285045 logs.go:276] 1 containers: [2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3]
	I0816 17:52:11.347928  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:11.351603  285045 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 17:52:11.351723  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 17:52:11.400968  285045 cri.go:89] found id: "8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e"
	I0816 17:52:11.401037  285045 cri.go:89] found id: ""
	I0816 17:52:11.401058  285045 logs.go:276] 1 containers: [8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e]
	I0816 17:52:11.401145  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:11.405081  285045 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 17:52:11.405200  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 17:52:11.448800  285045 cri.go:89] found id: "3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac"
	I0816 17:52:11.448869  285045 cri.go:89] found id: ""
	I0816 17:52:11.448884  285045 logs.go:276] 1 containers: [3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac]
	I0816 17:52:11.448958  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:11.452524  285045 logs.go:123] Gathering logs for coredns [012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52] ...
	I0816 17:52:11.452551  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52"
	I0816 17:52:11.514732  285045 logs.go:123] Gathering logs for kube-scheduler [37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db] ...
	I0816 17:52:11.514775  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db"
	I0816 17:52:11.566526  285045 logs.go:123] Gathering logs for CRI-O ...
	I0816 17:52:11.566562  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 17:52:11.667037  285045 logs.go:123] Gathering logs for dmesg ...
	I0816 17:52:11.667076  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 17:52:11.683945  285045 logs.go:123] Gathering logs for describe nodes ...
	I0816 17:52:11.683977  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 17:52:11.885764  285045 logs.go:123] Gathering logs for kube-apiserver [16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2] ...
	I0816 17:52:11.885796  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2"
	I0816 17:52:11.949048  285045 logs.go:123] Gathering logs for kube-controller-manager [8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e] ...
	I0816 17:52:11.949087  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e"
	I0816 17:52:12.029026  285045 logs.go:123] Gathering logs for kindnet [3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac] ...
	I0816 17:52:12.029089  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac"
	I0816 17:52:12.125610  285045 logs.go:123] Gathering logs for container status ...
	I0816 17:52:12.125700  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 17:52:12.209774  285045 logs.go:123] Gathering logs for kubelet ...
	I0816 17:52:12.209809  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0816 17:52:12.259765  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.299922    1482 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-035693' and this object
	W0816 17:52:12.260009  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.299969    1482 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:12.260189  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.300343    1482 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-035693' and this object
	W0816 17:52:12.260402  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.300373    1482 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:12.260622  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.300698    1482 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-035693' and this object
	W0816 17:52:12.260853  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.300726    1482 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:12.261036  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.301026    1482 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-035693' and this object
	W0816 17:52:12.261261  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.301072    1482 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:12.261430  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.301389    1482 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-035693" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-035693' and this object
	W0816 17:52:12.261637  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.301415    1482 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	I0816 17:52:12.300441  285045 logs.go:123] Gathering logs for etcd [561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea] ...
	I0816 17:52:12.300470  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea"
	I0816 17:52:12.351415  285045 logs.go:123] Gathering logs for kube-proxy [2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3] ...
	I0816 17:52:12.351448  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3"
	I0816 17:52:12.395004  285045 out.go:358] Setting ErrFile to fd 2...
	I0816 17:52:12.395037  285045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0816 17:52:12.395142  285045 out.go:270] X Problems detected in kubelet:
	W0816 17:52:12.395162  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.300726    1482 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:12.395175  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.301026    1482 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-035693' and this object
	W0816 17:52:12.395183  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.301072    1482 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:12.395195  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.301389    1482 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-035693" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-035693' and this object
	W0816 17:52:12.395201  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.301415    1482 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	I0816 17:52:12.395224  285045 out.go:358] Setting ErrFile to fd 2...
	I0816 17:52:12.395232  285045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:52:22.396545  285045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:52:22.412549  285045 api_server.go:72] duration metric: took 2m18.524415052s to wait for apiserver process to appear ...
	I0816 17:52:22.412593  285045 api_server.go:88] waiting for apiserver healthz status ...
	I0816 17:52:22.412632  285045 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 17:52:22.412697  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 17:52:22.461140  285045 cri.go:89] found id: "16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2"
	I0816 17:52:22.461166  285045 cri.go:89] found id: ""
	I0816 17:52:22.461175  285045 logs.go:276] 1 containers: [16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2]
	I0816 17:52:22.461234  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:22.465222  285045 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 17:52:22.465290  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 17:52:22.512115  285045 cri.go:89] found id: "561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea"
	I0816 17:52:22.512136  285045 cri.go:89] found id: ""
	I0816 17:52:22.512144  285045 logs.go:276] 1 containers: [561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea]
	I0816 17:52:22.512201  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:22.515891  285045 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 17:52:22.515970  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 17:52:22.556457  285045 cri.go:89] found id: "012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52"
	I0816 17:52:22.556478  285045 cri.go:89] found id: ""
	I0816 17:52:22.556499  285045 logs.go:276] 1 containers: [012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52]
	I0816 17:52:22.556556  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:22.560650  285045 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 17:52:22.560722  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 17:52:22.600623  285045 cri.go:89] found id: "37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db"
	I0816 17:52:22.600649  285045 cri.go:89] found id: ""
	I0816 17:52:22.600667  285045 logs.go:276] 1 containers: [37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db]
	I0816 17:52:22.600728  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:22.604611  285045 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 17:52:22.604693  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 17:52:22.652837  285045 cri.go:89] found id: "2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3"
	I0816 17:52:22.652861  285045 cri.go:89] found id: ""
	I0816 17:52:22.652872  285045 logs.go:276] 1 containers: [2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3]
	I0816 17:52:22.652947  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:22.656998  285045 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 17:52:22.657093  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 17:52:22.704427  285045 cri.go:89] found id: "8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e"
	I0816 17:52:22.704500  285045 cri.go:89] found id: ""
	I0816 17:52:22.704514  285045 logs.go:276] 1 containers: [8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e]
	I0816 17:52:22.704613  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:22.708328  285045 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 17:52:22.708395  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 17:52:22.749812  285045 cri.go:89] found id: "3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac"
	I0816 17:52:22.749836  285045 cri.go:89] found id: ""
	I0816 17:52:22.749844  285045 logs.go:276] 1 containers: [3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac]
	I0816 17:52:22.749923  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:22.753783  285045 logs.go:123] Gathering logs for describe nodes ...
	I0816 17:52:22.753814  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 17:52:22.903048  285045 logs.go:123] Gathering logs for kube-scheduler [37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db] ...
	I0816 17:52:22.903081  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db"
	I0816 17:52:22.959876  285045 logs.go:123] Gathering logs for kube-controller-manager [8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e] ...
	I0816 17:52:22.959910  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e"
	I0816 17:52:23.033378  285045 logs.go:123] Gathering logs for kindnet [3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac] ...
	I0816 17:52:23.033416  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac"
	I0816 17:52:23.079523  285045 logs.go:123] Gathering logs for kubelet ...
	I0816 17:52:23.079557  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0816 17:52:23.130020  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.299922    1482 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-035693' and this object
	W0816 17:52:23.130290  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.299969    1482 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:23.130469  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.300343    1482 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-035693' and this object
	W0816 17:52:23.130682  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.300373    1482 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:23.130870  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.300698    1482 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-035693' and this object
	W0816 17:52:23.131101  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.300726    1482 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:23.131287  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.301026    1482 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-035693' and this object
	W0816 17:52:23.131521  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.301072    1482 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:23.131740  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.301389    1482 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-035693" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-035693' and this object
	W0816 17:52:23.131956  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.301415    1482 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	I0816 17:52:23.172004  285045 logs.go:123] Gathering logs for dmesg ...
	I0816 17:52:23.172042  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 17:52:23.189120  285045 logs.go:123] Gathering logs for kube-apiserver [16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2] ...
	I0816 17:52:23.189151  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2"
	I0816 17:52:23.262456  285045 logs.go:123] Gathering logs for etcd [561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea] ...
	I0816 17:52:23.262488  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea"
	I0816 17:52:23.312676  285045 logs.go:123] Gathering logs for coredns [012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52] ...
	I0816 17:52:23.312715  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52"
	I0816 17:52:23.362275  285045 logs.go:123] Gathering logs for kube-proxy [2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3] ...
	I0816 17:52:23.362306  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3"
	I0816 17:52:23.408344  285045 logs.go:123] Gathering logs for CRI-O ...
	I0816 17:52:23.408376  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 17:52:23.510483  285045 logs.go:123] Gathering logs for container status ...
	I0816 17:52:23.510570  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 17:52:23.577126  285045 out.go:358] Setting ErrFile to fd 2...
	I0816 17:52:23.577157  285045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0816 17:52:23.577241  285045 out.go:270] X Problems detected in kubelet:
	W0816 17:52:23.577255  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.300726    1482 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:23.577285  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.301026    1482 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-035693' and this object
	W0816 17:52:23.577294  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.301072    1482 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:23.577301  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.301389    1482 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-035693" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-035693' and this object
	W0816 17:52:23.577308  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.301415    1482 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	I0816 17:52:23.577320  285045 out.go:358] Setting ErrFile to fd 2...
	I0816 17:52:23.577327  285045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:52:33.578632  285045 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0816 17:52:33.586611  285045 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0816 17:52:33.587709  285045 api_server.go:141] control plane version: v1.31.0
	I0816 17:52:33.587737  285045 api_server.go:131] duration metric: took 11.175137198s to wait for apiserver health ...
	I0816 17:52:33.587746  285045 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 17:52:33.587768  285045 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 17:52:33.587825  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 17:52:33.624854  285045 cri.go:89] found id: "16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2"
	I0816 17:52:33.624879  285045 cri.go:89] found id: ""
	I0816 17:52:33.624896  285045 logs.go:276] 1 containers: [16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2]
	I0816 17:52:33.624955  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:33.628433  285045 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 17:52:33.628500  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 17:52:33.668071  285045 cri.go:89] found id: "561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea"
	I0816 17:52:33.668091  285045 cri.go:89] found id: ""
	I0816 17:52:33.668100  285045 logs.go:276] 1 containers: [561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea]
	I0816 17:52:33.668156  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:33.671765  285045 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 17:52:33.671833  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 17:52:33.714887  285045 cri.go:89] found id: "012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52"
	I0816 17:52:33.714906  285045 cri.go:89] found id: ""
	I0816 17:52:33.714915  285045 logs.go:276] 1 containers: [012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52]
	I0816 17:52:33.714973  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:33.718624  285045 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 17:52:33.718692  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 17:52:33.759957  285045 cri.go:89] found id: "37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db"
	I0816 17:52:33.759982  285045 cri.go:89] found id: ""
	I0816 17:52:33.759991  285045 logs.go:276] 1 containers: [37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db]
	I0816 17:52:33.760047  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:33.763595  285045 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 17:52:33.763665  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 17:52:33.804084  285045 cri.go:89] found id: "2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3"
	I0816 17:52:33.804111  285045 cri.go:89] found id: ""
	I0816 17:52:33.804119  285045 logs.go:276] 1 containers: [2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3]
	I0816 17:52:33.804177  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:33.808159  285045 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 17:52:33.808233  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 17:52:33.847110  285045 cri.go:89] found id: "8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e"
	I0816 17:52:33.847132  285045 cri.go:89] found id: ""
	I0816 17:52:33.847140  285045 logs.go:276] 1 containers: [8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e]
	I0816 17:52:33.847232  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:33.851158  285045 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 17:52:33.851280  285045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 17:52:33.890592  285045 cri.go:89] found id: "3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac"
	I0816 17:52:33.890622  285045 cri.go:89] found id: ""
	I0816 17:52:33.890630  285045 logs.go:276] 1 containers: [3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac]
	I0816 17:52:33.890692  285045 ssh_runner.go:195] Run: which crictl
	I0816 17:52:33.894418  285045 logs.go:123] Gathering logs for kubelet ...
	I0816 17:52:33.894453  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0816 17:52:33.940428  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.299922    1482 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-035693' and this object
	W0816 17:52:33.940703  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.299969    1482 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:33.940881  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.300343    1482 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-035693' and this object
	W0816 17:52:33.941099  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.300373    1482 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:33.941287  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.300698    1482 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-035693' and this object
	W0816 17:52:33.941514  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.300726    1482 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:33.941702  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.301026    1482 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-035693' and this object
	W0816 17:52:33.941929  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.301072    1482 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:33.942131  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.301389    1482 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-035693" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-035693' and this object
	W0816 17:52:33.942353  285045 logs.go:138] Found kubelet problem: Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.301415    1482 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	I0816 17:52:33.984453  285045 logs.go:123] Gathering logs for dmesg ...
	I0816 17:52:33.984488  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 17:52:34.000832  285045 logs.go:123] Gathering logs for describe nodes ...
	I0816 17:52:34.000862  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 17:52:34.147169  285045 logs.go:123] Gathering logs for kube-apiserver [16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2] ...
	I0816 17:52:34.147198  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2"
	I0816 17:52:34.200849  285045 logs.go:123] Gathering logs for coredns [012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52] ...
	I0816 17:52:34.200884  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52"
	I0816 17:52:34.250310  285045 logs.go:123] Gathering logs for kube-controller-manager [8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e] ...
	I0816 17:52:34.250341  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e"
	I0816 17:52:34.321425  285045 logs.go:123] Gathering logs for etcd [561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea] ...
	I0816 17:52:34.321464  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea"
	I0816 17:52:34.370297  285045 logs.go:123] Gathering logs for kube-scheduler [37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db] ...
	I0816 17:52:34.370371  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db"
	I0816 17:52:34.429642  285045 logs.go:123] Gathering logs for kube-proxy [2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3] ...
	I0816 17:52:34.429675  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3"
	I0816 17:52:34.470183  285045 logs.go:123] Gathering logs for kindnet [3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac] ...
	I0816 17:52:34.470214  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac"
	I0816 17:52:34.517728  285045 logs.go:123] Gathering logs for CRI-O ...
	I0816 17:52:34.517767  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 17:52:34.613499  285045 logs.go:123] Gathering logs for container status ...
	I0816 17:52:34.613537  285045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 17:52:34.664287  285045 out.go:358] Setting ErrFile to fd 2...
	I0816 17:52:34.664316  285045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0816 17:52:34.664462  285045 out.go:270] X Problems detected in kubelet:
	W0816 17:52:34.664480  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.300726    1482 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:34.664626  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.301026    1482 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-035693" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-035693' and this object
	W0816 17:52:34.664637  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.301072    1482 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	W0816 17:52:34.664649  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: W0816 17:50:50.301389    1482 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-035693" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-035693' and this object
	W0816 17:52:34.664657  285045 out.go:270]   Aug 16 17:50:50 addons-035693 kubelet[1482]: E0816 17:50:50.301415    1482 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-035693\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-035693' and this object" logger="UnhandledError"
	I0816 17:52:34.664663  285045 out.go:358] Setting ErrFile to fd 2...
	I0816 17:52:34.664670  285045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:52:44.677793  285045 system_pods.go:59] 18 kube-system pods found
	I0816 17:52:44.677834  285045 system_pods.go:61] "coredns-6f6b679f8f-rbz4z" [92644452-aa36-4753-a264-a18cccb9492c] Running
	I0816 17:52:44.677842  285045 system_pods.go:61] "csi-hostpath-attacher-0" [664885ca-b697-49ae-880d-8ecc99fc3626] Running
	I0816 17:52:44.677849  285045 system_pods.go:61] "csi-hostpath-resizer-0" [2f444058-2a11-47d1-9aa5-5819a61fd0eb] Running
	I0816 17:52:44.677854  285045 system_pods.go:61] "csi-hostpathplugin-zhj5f" [f51366cb-7c4d-495a-82b3-e94e9f6e557a] Running
	I0816 17:52:44.677859  285045 system_pods.go:61] "etcd-addons-035693" [921b06ac-bc6a-42f2-b195-bf0df0c41429] Running
	I0816 17:52:44.677863  285045 system_pods.go:61] "kindnet-ss96t" [a57b0d98-03aa-45a1-a52d-fa5c7752f339] Running
	I0816 17:52:44.677868  285045 system_pods.go:61] "kube-apiserver-addons-035693" [46468507-7fa1-48a0-86a7-bb8c24da898a] Running
	I0816 17:52:44.677898  285045 system_pods.go:61] "kube-controller-manager-addons-035693" [437a7398-1257-4c14-9bdb-dc231abacfc3] Running
	I0816 17:52:44.677903  285045 system_pods.go:61] "kube-ingress-dns-minikube" [678e56e7-144e-4853-bb85-8157ca9cdd5d] Running
	I0816 17:52:44.677908  285045 system_pods.go:61] "kube-proxy-gk9xc" [fdb8dfd7-8793-4882-9b5f-d512e5caff6f] Running
	I0816 17:52:44.677912  285045 system_pods.go:61] "kube-scheduler-addons-035693" [9996406e-7cbf-43cf-8100-4a4e1fed2cb7] Running
	I0816 17:52:44.677916  285045 system_pods.go:61] "metrics-server-8988944d9-ssk4x" [0bdf104e-0061-4330-aaa3-3ed64ee249e7] Running
	I0816 17:52:44.677920  285045 system_pods.go:61] "nvidia-device-plugin-daemonset-jsx2r" [c4f0b8cd-7cfb-4b35-b194-ec9b1febfd6b] Running
	I0816 17:52:44.677924  285045 system_pods.go:61] "registry-6fb4cdfc84-tm8w6" [7a1098d6-9eed-44ed-b050-d7eb7f621f53] Running
	I0816 17:52:44.677927  285045 system_pods.go:61] "registry-proxy-t2nrw" [b85d1b9d-5cbc-4b35-a578-9eb458257f07] Running
	I0816 17:52:44.677931  285045 system_pods.go:61] "snapshot-controller-56fcc65765-cnwmk" [1d49b80b-be25-4e9d-ba9b-44170fa68be0] Running
	I0816 17:52:44.677935  285045 system_pods.go:61] "snapshot-controller-56fcc65765-gr2ps" [1efa743e-f16d-4942-8095-a87be8bd0e66] Running
	I0816 17:52:44.677939  285045 system_pods.go:61] "storage-provisioner" [7ce79565-40dd-4899-9f49-003e0e94fdd9] Running
	I0816 17:52:44.677945  285045 system_pods.go:74] duration metric: took 11.090193132s to wait for pod list to return data ...
	I0816 17:52:44.677952  285045 default_sa.go:34] waiting for default service account to be created ...
	I0816 17:52:44.681217  285045 default_sa.go:45] found service account: "default"
	I0816 17:52:44.681245  285045 default_sa.go:55] duration metric: took 3.28594ms for default service account to be created ...
	I0816 17:52:44.681255  285045 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 17:52:44.691915  285045 system_pods.go:86] 18 kube-system pods found
	I0816 17:52:44.692014  285045 system_pods.go:89] "coredns-6f6b679f8f-rbz4z" [92644452-aa36-4753-a264-a18cccb9492c] Running
	I0816 17:52:44.692038  285045 system_pods.go:89] "csi-hostpath-attacher-0" [664885ca-b697-49ae-880d-8ecc99fc3626] Running
	I0816 17:52:44.692082  285045 system_pods.go:89] "csi-hostpath-resizer-0" [2f444058-2a11-47d1-9aa5-5819a61fd0eb] Running
	I0816 17:52:44.692107  285045 system_pods.go:89] "csi-hostpathplugin-zhj5f" [f51366cb-7c4d-495a-82b3-e94e9f6e557a] Running
	I0816 17:52:44.692127  285045 system_pods.go:89] "etcd-addons-035693" [921b06ac-bc6a-42f2-b195-bf0df0c41429] Running
	I0816 17:52:44.692161  285045 system_pods.go:89] "kindnet-ss96t" [a57b0d98-03aa-45a1-a52d-fa5c7752f339] Running
	I0816 17:52:44.692186  285045 system_pods.go:89] "kube-apiserver-addons-035693" [46468507-7fa1-48a0-86a7-bb8c24da898a] Running
	I0816 17:52:44.692205  285045 system_pods.go:89] "kube-controller-manager-addons-035693" [437a7398-1257-4c14-9bdb-dc231abacfc3] Running
	I0816 17:52:44.692244  285045 system_pods.go:89] "kube-ingress-dns-minikube" [678e56e7-144e-4853-bb85-8157ca9cdd5d] Running
	I0816 17:52:44.692265  285045 system_pods.go:89] "kube-proxy-gk9xc" [fdb8dfd7-8793-4882-9b5f-d512e5caff6f] Running
	I0816 17:52:44.692284  285045 system_pods.go:89] "kube-scheduler-addons-035693" [9996406e-7cbf-43cf-8100-4a4e1fed2cb7] Running
	I0816 17:52:44.692296  285045 system_pods.go:89] "metrics-server-8988944d9-ssk4x" [0bdf104e-0061-4330-aaa3-3ed64ee249e7] Running
	I0816 17:52:44.692301  285045 system_pods.go:89] "nvidia-device-plugin-daemonset-jsx2r" [c4f0b8cd-7cfb-4b35-b194-ec9b1febfd6b] Running
	I0816 17:52:44.692305  285045 system_pods.go:89] "registry-6fb4cdfc84-tm8w6" [7a1098d6-9eed-44ed-b050-d7eb7f621f53] Running
	I0816 17:52:44.692309  285045 system_pods.go:89] "registry-proxy-t2nrw" [b85d1b9d-5cbc-4b35-a578-9eb458257f07] Running
	I0816 17:52:44.692315  285045 system_pods.go:89] "snapshot-controller-56fcc65765-cnwmk" [1d49b80b-be25-4e9d-ba9b-44170fa68be0] Running
	I0816 17:52:44.692319  285045 system_pods.go:89] "snapshot-controller-56fcc65765-gr2ps" [1efa743e-f16d-4942-8095-a87be8bd0e66] Running
	I0816 17:52:44.692323  285045 system_pods.go:89] "storage-provisioner" [7ce79565-40dd-4899-9f49-003e0e94fdd9] Running
	I0816 17:52:44.692334  285045 system_pods.go:126] duration metric: took 11.073024ms to wait for k8s-apps to be running ...
	I0816 17:52:44.692343  285045 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 17:52:44.692404  285045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:52:44.705414  285045 system_svc.go:56] duration metric: took 13.060632ms WaitForService to wait for kubelet
	I0816 17:52:44.705445  285045 kubeadm.go:582] duration metric: took 2m40.817316329s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 17:52:44.705469  285045 node_conditions.go:102] verifying NodePressure condition ...
	I0816 17:52:44.709137  285045 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0816 17:52:44.709188  285045 node_conditions.go:123] node cpu capacity is 2
	I0816 17:52:44.709200  285045 node_conditions.go:105] duration metric: took 3.724607ms to run NodePressure ...
	I0816 17:52:44.709213  285045 start.go:241] waiting for startup goroutines ...
	I0816 17:52:44.709221  285045 start.go:246] waiting for cluster config update ...
	I0816 17:52:44.709238  285045 start.go:255] writing updated cluster config ...
	I0816 17:52:44.709541  285045 ssh_runner.go:195] Run: rm -f paused
	I0816 17:52:45.116965  285045 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 17:52:45.120058  285045 out.go:177] * Done! kubectl is now configured to use "addons-035693" cluster and "default" namespace by default
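The `duration metric: took …` lines above come from minikube's poll-and-time wait helpers (waiting for system pods, the default service account, node conditions). A minimal, hypothetical sketch of that pattern — the `wait_for` name and signature are mine, not minikube's:

```python
import time

def wait_for(check, timeout=90.0, interval=0.2):
    """Poll `check` until it returns a truthy value or `timeout` elapses.

    Returns (result, elapsed_seconds), mirroring the way the log above
    reports how long each wait took ("duration metric: took ...").
    """
    start = time.monotonic()
    while True:
        result = check()
        elapsed = time.monotonic() - start
        if result:
            return result, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"condition not met after {elapsed:.1f}s")
        time.sleep(interval)
```

In the real harness each condition (apiserver, apps_running, default_sa, …) is a separate check, and the per-condition durations are summed into the overall `kubeadm.go:582` metric.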
	
	
	==> CRI-O <==
	Aug 16 17:56:59 addons-035693 crio[963]: time="2024-08-16 17:56:59.087518917Z" level=info msg="Removed container 4ea49a234283c6a409e4335f657228e09f80fcf773df90d5023cbb92f6200c3c: ingress-nginx/ingress-nginx-admission-patch-qkv45/patch" id=3d8cb593-74e1-47f6-953a-64ab1220ff2c name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 16 17:56:59 addons-035693 crio[963]: time="2024-08-16 17:56:59.089417665Z" level=info msg="Removing container: 852a4d200668cf681597d255d21967df1da837854e1cac2b0e569bfb52e8d3f8" id=87ba46f3-497b-4144-87f0-232cf449b83e name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 16 17:56:59 addons-035693 crio[963]: time="2024-08-16 17:56:59.112287806Z" level=info msg="Removed container 852a4d200668cf681597d255d21967df1da837854e1cac2b0e569bfb52e8d3f8: ingress-nginx/ingress-nginx-admission-create-mkgjx/create" id=87ba46f3-497b-4144-87f0-232cf449b83e name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 16 17:56:59 addons-035693 crio[963]: time="2024-08-16 17:56:59.113734822Z" level=info msg="Stopping pod sandbox: 7e3f340815cccace0cb553e3309bb0ef1bc588e46471518489cddfbc49c17bb9" id=487e224e-f0df-4f57-92e5-437f8f2b52a1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 16 17:56:59 addons-035693 crio[963]: time="2024-08-16 17:56:59.113773542Z" level=info msg="Stopped pod sandbox (already stopped): 7e3f340815cccace0cb553e3309bb0ef1bc588e46471518489cddfbc49c17bb9" id=487e224e-f0df-4f57-92e5-437f8f2b52a1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 16 17:56:59 addons-035693 crio[963]: time="2024-08-16 17:56:59.114058808Z" level=info msg="Removing pod sandbox: 7e3f340815cccace0cb553e3309bb0ef1bc588e46471518489cddfbc49c17bb9" id=5bd0d113-e00e-4fd0-997c-9b0014899504 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 16 17:56:59 addons-035693 crio[963]: time="2024-08-16 17:56:59.122499612Z" level=info msg="Removed pod sandbox: 7e3f340815cccace0cb553e3309bb0ef1bc588e46471518489cddfbc49c17bb9" id=5bd0d113-e00e-4fd0-997c-9b0014899504 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 16 17:56:59 addons-035693 crio[963]: time="2024-08-16 17:56:59.123010453Z" level=info msg="Stopping pod sandbox: a629521fd300726232807d723b2f60943451dc5c9a417ecdf73b2eed7c60bba1" id=b6a689e3-b9ee-448d-9980-48e316297121 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 16 17:56:59 addons-035693 crio[963]: time="2024-08-16 17:56:59.123042051Z" level=info msg="Stopped pod sandbox (already stopped): a629521fd300726232807d723b2f60943451dc5c9a417ecdf73b2eed7c60bba1" id=b6a689e3-b9ee-448d-9980-48e316297121 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 16 17:56:59 addons-035693 crio[963]: time="2024-08-16 17:56:59.123477872Z" level=info msg="Removing pod sandbox: a629521fd300726232807d723b2f60943451dc5c9a417ecdf73b2eed7c60bba1" id=26ad84e0-5ea0-4263-a5f3-9420ef0630c0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 16 17:56:59 addons-035693 crio[963]: time="2024-08-16 17:56:59.138027512Z" level=info msg="Removed pod sandbox: a629521fd300726232807d723b2f60943451dc5c9a417ecdf73b2eed7c60bba1" id=26ad84e0-5ea0-4263-a5f3-9420ef0630c0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 16 17:56:59 addons-035693 crio[963]: time="2024-08-16 17:56:59.138824989Z" level=info msg="Stopping pod sandbox: be1f6065f7ad6aaa8e5bde85fe76d782a368497b6036e4f75e0eda055adafb0f" id=d8f2564c-ec85-4cf4-b464-a051c10500e6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 16 17:56:59 addons-035693 crio[963]: time="2024-08-16 17:56:59.138979089Z" level=info msg="Stopped pod sandbox (already stopped): be1f6065f7ad6aaa8e5bde85fe76d782a368497b6036e4f75e0eda055adafb0f" id=d8f2564c-ec85-4cf4-b464-a051c10500e6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 16 17:56:59 addons-035693 crio[963]: time="2024-08-16 17:56:59.139423986Z" level=info msg="Removing pod sandbox: be1f6065f7ad6aaa8e5bde85fe76d782a368497b6036e4f75e0eda055adafb0f" id=e6098221-693c-46a3-8bbf-c31d805298d7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 16 17:56:59 addons-035693 crio[963]: time="2024-08-16 17:56:59.148842869Z" level=info msg="Removed pod sandbox: be1f6065f7ad6aaa8e5bde85fe76d782a368497b6036e4f75e0eda055adafb0f" id=e6098221-693c-46a3-8bbf-c31d805298d7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 16 17:56:59 addons-035693 crio[963]: time="2024-08-16 17:56:59.149316737Z" level=info msg="Stopping pod sandbox: 4cf660e3241dc2aa282aaa67b49da2fc49aa2fb7bd89322927a47e85fffab122" id=9f6cfe49-c51e-419b-a89a-6ba9e84fdb82 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 16 17:56:59 addons-035693 crio[963]: time="2024-08-16 17:56:59.149352963Z" level=info msg="Stopped pod sandbox (already stopped): 4cf660e3241dc2aa282aaa67b49da2fc49aa2fb7bd89322927a47e85fffab122" id=9f6cfe49-c51e-419b-a89a-6ba9e84fdb82 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 16 17:56:59 addons-035693 crio[963]: time="2024-08-16 17:56:59.149670278Z" level=info msg="Removing pod sandbox: 4cf660e3241dc2aa282aaa67b49da2fc49aa2fb7bd89322927a47e85fffab122" id=1d12455b-afaa-4e4c-99a4-3d4d8595d6a6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 16 17:56:59 addons-035693 crio[963]: time="2024-08-16 17:56:59.158364800Z" level=info msg="Removed pod sandbox: 4cf660e3241dc2aa282aaa67b49da2fc49aa2fb7bd89322927a47e85fffab122" id=1d12455b-afaa-4e4c-99a4-3d4d8595d6a6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 16 17:59:21 addons-035693 crio[963]: time="2024-08-16 17:59:21.530206700Z" level=info msg="Stopping container: f3a4ca79ab8fe87888f803ef7d7b048ec9d22cac5f4af5fc2ad9098993e55a5f (timeout: 30s)" id=e745d08e-760d-425a-8da7-9a077ab54f2b name=/runtime.v1.RuntimeService/StopContainer
	Aug 16 17:59:22 addons-035693 crio[963]: time="2024-08-16 17:59:22.696116601Z" level=info msg="Stopped container f3a4ca79ab8fe87888f803ef7d7b048ec9d22cac5f4af5fc2ad9098993e55a5f: kube-system/metrics-server-8988944d9-ssk4x/metrics-server" id=e745d08e-760d-425a-8da7-9a077ab54f2b name=/runtime.v1.RuntimeService/StopContainer
	Aug 16 17:59:22 addons-035693 crio[963]: time="2024-08-16 17:59:22.696994062Z" level=info msg="Stopping pod sandbox: 648e2f793ade07e40036fc21419db07c3db212350d9cb2719a8164db154b92f7" id=598b795d-851d-4be3-9364-29960a5ab22b name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 16 17:59:22 addons-035693 crio[963]: time="2024-08-16 17:59:22.697227882Z" level=info msg="Got pod network &{Name:metrics-server-8988944d9-ssk4x Namespace:kube-system ID:648e2f793ade07e40036fc21419db07c3db212350d9cb2719a8164db154b92f7 UID:0bdf104e-0061-4330-aaa3-3ed64ee249e7 NetNS:/var/run/netns/1c4cc315-df26-44d1-9257-fce1eb2a91e3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 16 17:59:22 addons-035693 crio[963]: time="2024-08-16 17:59:22.697377666Z" level=info msg="Deleting pod kube-system_metrics-server-8988944d9-ssk4x from CNI network \"kindnet\" (type=ptp)"
	Aug 16 17:59:22 addons-035693 crio[963]: time="2024-08-16 17:59:22.736307322Z" level=info msg="Stopped pod sandbox: 648e2f793ade07e40036fc21419db07c3db212350d9cb2719a8164db154b92f7" id=598b795d-851d-4be3-9364-29960a5ab22b name=/runtime.v1.RuntimeService/StopPodSandbox
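The stop/remove pairs above — including the `Stopped pod sandbox (already stopped)` no-ops — reflect the idempotent sandbox lifecycle a CRI runtime exposes: stop may be called repeatedly, and removal is only valid once the sandbox is no longer running. A toy model of that contract (the `Sandbox` class is illustrative only, not CRI-O code):

```python
class Sandbox:
    """Toy pod-sandbox state machine: running -> stopped -> removed."""

    def __init__(self):
        self.state = "running"

    def stop(self):
        # Idempotent: stopping an already-stopped sandbox is a no-op,
        # as in the "(already stopped)" log lines above.
        if self.state == "running":
            self.state = "stopped"
        return self.state

    def remove(self):
        # Removal requires the sandbox to be stopped first; repeating
        # the removal is harmless.
        if self.state == "running":
            raise RuntimeError("stop the sandbox before removing it")
        self.state = "removed"
        return self.state
```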
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a8a7185f682c1       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   b7dfdd1af7e75       hello-world-app-55bf9c44b4-tjfkk
	32ce343b9cee4       docker.io/library/nginx@sha256:ba188f579f7a2638229e326e78c957a185630e303757813ef1ad7aac1b8248b6                         5 minutes ago       Running             nginx                     0                   bfddf0315f927       nginx
	8ca4276cff561       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   a53ed5ae8418d       busybox
	2970f6e69eeae       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98        7 minutes ago       Running             local-path-provisioner    0                   357c57ee43388       local-path-provisioner-86d989889c-q9262
	f3a4ca79ab8fe       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70   8 minutes ago       Exited              metrics-server            0                   648e2f793ade0       metrics-server-8988944d9-ssk4x
	012e15f9a1f7c       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                        8 minutes ago       Running             coredns                   0                   2b67237d2dc49       coredns-6f6b679f8f-rbz4z
	63d1a9f6ba13e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        8 minutes ago       Running             storage-provisioner       0                   61293d44ba42f       storage-provisioner
	3dd59dbbe567f       docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64                      9 minutes ago       Running             kindnet-cni               0                   d9a660edd4ffe       kindnet-ss96t
	2f1c05f8b2d29       71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89                                                        9 minutes ago       Running             kube-proxy                0                   6c2ddff24a6f6       kube-proxy-gk9xc
	16603acf52d48       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388                                                        9 minutes ago       Running             kube-apiserver            0                   dec70b8738945       kube-apiserver-addons-035693
	37ec5b1fb253c       fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb                                                        9 minutes ago       Running             kube-scheduler            0                   fcd75fe526843       kube-scheduler-addons-035693
	561d83fad4550       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        9 minutes ago       Running             etcd                      0                   6bda942b786e6       etcd-addons-035693
	8087f1df94210       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd                                                        9 minutes ago       Running             kube-controller-manager   0                   f4447781e9f34       kube-controller-manager-addons-035693
	
	
	==> coredns [012e15f9a1f7c20fbe55d6e406638d4b933c25461471c3bd1ca35b27b5b45f52] <==
	[INFO] 10.244.0.2:45338 - 44082 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001699954s
	[INFO] 10.244.0.2:41294 - 38935 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000081042s
	[INFO] 10.244.0.2:41294 - 38698 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000193968s
	[INFO] 10.244.0.2:52210 - 27882 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000110022s
	[INFO] 10.244.0.2:52210 - 1686 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000172487s
	[INFO] 10.244.0.2:60220 - 62726 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000059413s
	[INFO] 10.244.0.2:60220 - 59652 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000036808s
	[INFO] 10.244.0.2:37554 - 59365 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000048861s
	[INFO] 10.244.0.2:37554 - 61163 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00003607s
	[INFO] 10.244.0.2:58464 - 869 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001352066s
	[INFO] 10.244.0.2:58464 - 6499 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001419536s
	[INFO] 10.244.0.2:35475 - 56940 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000068406s
	[INFO] 10.244.0.2:35475 - 65362 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000114773s
	[INFO] 10.244.0.20:58227 - 5366 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000172511s
	[INFO] 10.244.0.20:41401 - 8599 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000069079s
	[INFO] 10.244.0.20:43759 - 13969 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000125784s
	[INFO] 10.244.0.20:52886 - 11569 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000098043s
	[INFO] 10.244.0.20:47274 - 34174 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000212118s
	[INFO] 10.244.0.20:53242 - 61048 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000097928s
	[INFO] 10.244.0.20:33077 - 62665 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003038531s
	[INFO] 10.244.0.20:55133 - 20954 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003908852s
	[INFO] 10.244.0.20:43332 - 13158 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001371981s
	[INFO] 10.244.0.20:42221 - 58688 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001199461s
	[INFO] 10.244.0.22:48707 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000175974s
	[INFO] 10.244.0.22:42769 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000122247s
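The NXDOMAIN/NOERROR pairs above are the resolver's search-path expansion at work: with the in-cluster default of `ndots:5`, even a fully qualified service name like `registry.kube-system.svc.cluster.local` (four dots) is tried against every search domain before being queried as-is, producing one NXDOMAIN per suffix and a final NOERROR. A minimal sketch of that expansion order, assuming a typical pod `resolv.conf` search list (the `us-east-2.compute.internal` suffix comes from the host):

```python
def expand_query(name, search_domains, ndots=5):
    """Return the candidate FQDNs a glibc-style resolver tries, in order.

    With fewer than `ndots` dots in `name`, the search domains are tried
    first and the bare name last; otherwise the bare name goes first.
    A trailing dot marks the name absolute, so no expansion happens.
    """
    if name.endswith("."):
        return [name]
    suffixed = [f"{name}.{d}" for d in search_domains]
    if name.count(".") < ndots:
        return suffixed + [name]
    return [name] + suffixed

search = ["kube-system.svc.cluster.local", "svc.cluster.local",
          "cluster.local", "us-east-2.compute.internal"]
```

Each candidate is typically queried for both A and AAAA, which is why the log shows the NXDOMAIN suffix lookups in pairs.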
	
	
	==> describe nodes <==
	Name:               addons-035693
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-035693
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=addons-035693
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T17_49_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-035693
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:49:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-035693
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:59:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:57:08 +0000   Fri, 16 Aug 2024 17:49:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:57:08 +0000   Fri, 16 Aug 2024 17:49:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:57:08 +0000   Fri, 16 Aug 2024 17:49:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:57:08 +0000   Fri, 16 Aug 2024 17:50:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-035693
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc4014d276dd41d38ed343c7bfd38367
	  System UUID:                0d15ea8b-f4e9-4cd8-9086-0f5742e0dff3
	  Boot ID:                    42540284-5019-4b99-817b-c2e55433aff8
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  default                     hello-world-app-55bf9c44b4-tjfkk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 coredns-6f6b679f8f-rbz4z                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     9m20s
	  kube-system                 etcd-addons-035693                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m25s
	  kube-system                 kindnet-ss96t                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m21s
	  kube-system                 kube-apiserver-addons-035693               250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m25s
	  kube-system                 kube-controller-manager-addons-035693      200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m25s
	  kube-system                 kube-proxy-gk9xc                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-scheduler-addons-035693               100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m25s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  local-path-storage          local-path-provisioner-86d989889c-q9262    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 9m12s                  kube-proxy       
	  Normal   Starting                 9m32s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m32s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  9m32s (x8 over 9m32s)  kubelet          Node addons-035693 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m32s (x8 over 9m32s)  kubelet          Node addons-035693 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m32s (x7 over 9m32s)  kubelet          Node addons-035693 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m25s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m25s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  9m25s (x2 over 9m25s)  kubelet          Node addons-035693 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m25s (x2 over 9m25s)  kubelet          Node addons-035693 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m25s (x2 over 9m25s)  kubelet          Node addons-035693 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m21s                  node-controller  Node addons-035693 event: Registered Node addons-035693 in Controller
	  Normal   NodeReady                8m33s                  kubelet          Node addons-035693 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug16 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.013703] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.456008] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.059863] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002591] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.017003] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.004100] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.003508] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.725784] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.595200] kauditd_printk_skb: 36 callbacks suppressed
	[Aug16 16:48] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Aug16 16:51] hrtimer: interrupt took 1350672 ns
	[Aug16 17:21] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [561d83fad4550540f02fbd655d5b88d17e2d9df95401041d8ac872122b453cea] <==
	{"level":"info","ts":"2024-08-16T17:49:51.896129Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-16T17:49:51.896361Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-16T17:49:52.148611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-16T17:49:52.148754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-16T17:49:52.148807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-16T17:49:52.148861Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-16T17:49:52.148897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-16T17:49:52.148937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-16T17:49:52.148975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-16T17:49:52.154311Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-035693 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T17:49:52.154740Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T17:49:52.154781Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T17:49:52.157349Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T17:49:52.160671Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T17:49:52.160776Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T17:49:52.160879Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T17:49:52.160984Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T17:49:52.161042Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T17:49:52.154809Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T17:49:52.161885Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T17:49:52.162838Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T17:49:52.165142Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-16T17:50:05.420955Z","caller":"traceutil/trace.go:171","msg":"trace[427995030] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"213.342128ms","start":"2024-08-16T17:50:05.207507Z","end":"2024-08-16T17:50:05.420850Z","steps":["trace[427995030] 'process raft request'  (duration: 207.054353ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:50:06.204366Z","caller":"traceutil/trace.go:171","msg":"trace[1409747860] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"272.602168ms","start":"2024-08-16T17:50:05.931749Z","end":"2024-08-16T17:50:06.204351Z","steps":["trace[1409747860] 'process raft request'  (duration: 272.477033ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:50:07.416840Z","caller":"traceutil/trace.go:171","msg":"trace[2140881632] transaction","detail":"{read_only:false; response_revision:413; number_of_response:1; }","duration":"117.650046ms","start":"2024-08-16T17:50:07.299169Z","end":"2024-08-16T17:50:07.416820Z","steps":["trace[2140881632] 'process raft request'  (duration: 55.75318ms)","trace[2140881632] 'compare'  (duration: 51.745647ms)"],"step_count":2}
	
	
	==> kernel <==
	 17:59:23 up  1:41,  0 users,  load average: 0.43, 0.80, 1.65
	Linux addons-035693 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [3dd59dbbe567f2d245e345801144eab8a3071ac9373f1207ba120ca78d114fac] <==
	I0816 17:58:09.934032       1 main.go:299] handling current node
	I0816 17:58:19.933039       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 17:58:19.933095       1 main.go:299] handling current node
	W0816 17:58:22.622205       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0816 17:58:22.622315       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0816 17:58:26.756457       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 17:58:26.756495       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0816 17:58:29.933239       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 17:58:29.933276       1 main.go:299] handling current node
	W0816 17:58:31.918611       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0816 17:58:31.918652       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0816 17:58:39.933844       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 17:58:39.933881       1 main.go:299] handling current node
	I0816 17:58:49.933285       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 17:58:49.933399       1 main.go:299] handling current node
	W0816 17:58:57.169853       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0816 17:58:57.169892       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0816 17:58:59.933550       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 17:58:59.933586       1 main.go:299] handling current node
	W0816 17:59:06.262022       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0816 17:59:06.262169       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0816 17:59:09.933570       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 17:59:09.933605       1 main.go:299] handling current node
	I0816 17:59:19.933135       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 17:59:19.933172       1 main.go:299] handling current node
	
	
	==> kube-apiserver [16603acf52d4858a0a4d045945e3e83452b0785af12a9c502ed0dd330665f1f2] <==
	E0816 17:52:10.913782       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.107.224:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.107.224:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.107.224:443: connect: connection refused" logger="UnhandledError"
	I0816 17:52:11.021316       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0816 17:52:54.195348       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43404: use of closed network connection
	E0816 17:52:54.355261       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43434: use of closed network connection
	I0816 17:53:36.265906       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0816 17:53:46.288399       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.79.244"}
	I0816 17:53:57.813291       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 17:53:57.813342       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 17:53:57.855480       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 17:53:57.855536       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 17:53:57.941515       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 17:53:57.941727       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 17:53:58.013443       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 17:53:58.013509       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 17:53:58.036329       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 17:53:58.036697       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0816 17:53:59.014149       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0816 17:53:59.037371       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0816 17:53:59.166301       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0816 17:54:09.974518       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0816 17:54:11.002766       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0816 17:54:15.639164       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0816 17:54:15.948556       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.63.102"}
	I0816 17:56:36.879946       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.165.101"}
	E0816 17:56:39.359359       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [8087f1df942107045e526a7dfbc35384f7446526d7df5f84e96245b5d262797e] <==
	W0816 17:57:23.270693       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 17:57:23.270737       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 17:57:24.039395       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 17:57:24.039448       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 17:57:34.122112       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 17:57:34.122159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 17:57:48.079823       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 17:57:48.079976       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 17:58:04.225358       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 17:58:04.225400       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 17:58:18.143306       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 17:58:18.143353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 17:58:33.152392       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 17:58:33.152440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 17:58:42.797508       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 17:58:42.797558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 17:58:56.112318       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 17:58:56.112363       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 17:59:07.283200       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 17:59:07.283244       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 17:59:07.794731       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 17:59:07.794775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0816 17:59:21.492602       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="7.589µs"
	W0816 17:59:23.014655       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 17:59:23.014711       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [2f1c05f8b2d294f719f8e0c5743ba9d8f84ba8c744333bcda3dbb51a7b9ee1a3] <==
	I0816 17:50:09.285955       1 server_linux.go:66] "Using iptables proxy"
	I0816 17:50:10.422616       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0816 17:50:10.422687       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 17:50:10.765993       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0816 17:50:10.766078       1 server_linux.go:169] "Using iptables Proxier"
	I0816 17:50:10.778956       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 17:50:10.789207       1 server.go:483] "Version info" version="v1.31.0"
	I0816 17:50:10.789332       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:50:10.793126       1 config.go:197] "Starting service config controller"
	I0816 17:50:10.793247       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 17:50:10.793327       1 config.go:104] "Starting endpoint slice config controller"
	I0816 17:50:10.793370       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 17:50:10.793907       1 config.go:326] "Starting node config controller"
	I0816 17:50:10.793974       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 17:50:10.894446       1 shared_informer.go:320] Caches are synced for node config
	I0816 17:50:10.894509       1 shared_informer.go:320] Caches are synced for service config
	I0816 17:50:10.894536       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [37ec5b1fb253cf065389c9039cfa8673b05005ab8c510d306aa665b63d6d68db] <==
	W0816 17:49:56.006825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 17:49:56.017124       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:49:56.007107       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 17:49:56.017216       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 17:49:56.007158       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 17:49:56.017335       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:49:56.007222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 17:49:56.017427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:49:56.017697       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 17:49:56.017761       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:49:56.017874       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 17:49:56.017921       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:49:56.018037       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 17:49:56.018082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:49:56.018190       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 17:49:56.018254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:49:56.893453       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 17:49:56.893499       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:49:56.950082       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 17:49:56.950126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:49:56.981236       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 17:49:56.981286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:49:57.037808       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 17:49:57.037939       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0816 17:49:58.988972       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 17:58:18 addons-035693 kubelet[1482]: E0816 17:58:18.861240    1482 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723831098860976435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:58:18 addons-035693 kubelet[1482]: E0816 17:58:18.861276    1482 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723831098860976435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:58:28 addons-035693 kubelet[1482]: E0816 17:58:28.863332    1482 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723831108863104043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:58:28 addons-035693 kubelet[1482]: E0816 17:58:28.863368    1482 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723831108863104043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:58:38 addons-035693 kubelet[1482]: E0816 17:58:38.865641    1482 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723831118865409074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:58:38 addons-035693 kubelet[1482]: E0816 17:58:38.865676    1482 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723831118865409074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:58:48 addons-035693 kubelet[1482]: E0816 17:58:48.868857    1482 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723831128868627582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:58:48 addons-035693 kubelet[1482]: E0816 17:58:48.868893    1482 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723831128868627582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:58:54 addons-035693 kubelet[1482]: I0816 17:58:54.690332    1482 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 16 17:58:58 addons-035693 kubelet[1482]: E0816 17:58:58.872081    1482 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723831138871824715,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:58:58 addons-035693 kubelet[1482]: E0816 17:58:58.872121    1482 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723831138871824715,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:59:08 addons-035693 kubelet[1482]: E0816 17:59:08.875242    1482 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723831148875000342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:59:08 addons-035693 kubelet[1482]: E0816 17:59:08.875281    1482 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723831148875000342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:59:18 addons-035693 kubelet[1482]: E0816 17:59:18.877784    1482 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723831158877542121,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:59:18 addons-035693 kubelet[1482]: E0816 17:59:18.877822    1482 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723831158877542121,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:59:22 addons-035693 kubelet[1482]: I0816 17:59:22.837927    1482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0bdf104e-0061-4330-aaa3-3ed64ee249e7-tmp-dir\") pod \"0bdf104e-0061-4330-aaa3-3ed64ee249e7\" (UID: \"0bdf104e-0061-4330-aaa3-3ed64ee249e7\") "
	Aug 16 17:59:22 addons-035693 kubelet[1482]: I0816 17:59:22.837978    1482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29z6w\" (UniqueName: \"kubernetes.io/projected/0bdf104e-0061-4330-aaa3-3ed64ee249e7-kube-api-access-29z6w\") pod \"0bdf104e-0061-4330-aaa3-3ed64ee249e7\" (UID: \"0bdf104e-0061-4330-aaa3-3ed64ee249e7\") "
	Aug 16 17:59:22 addons-035693 kubelet[1482]: I0816 17:59:22.838279    1482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bdf104e-0061-4330-aaa3-3ed64ee249e7-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "0bdf104e-0061-4330-aaa3-3ed64ee249e7" (UID: "0bdf104e-0061-4330-aaa3-3ed64ee249e7"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 16 17:59:22 addons-035693 kubelet[1482]: I0816 17:59:22.841533    1482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bdf104e-0061-4330-aaa3-3ed64ee249e7-kube-api-access-29z6w" (OuterVolumeSpecName: "kube-api-access-29z6w") pod "0bdf104e-0061-4330-aaa3-3ed64ee249e7" (UID: "0bdf104e-0061-4330-aaa3-3ed64ee249e7"). InnerVolumeSpecName "kube-api-access-29z6w". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 16 17:59:22 addons-035693 kubelet[1482]: I0816 17:59:22.885189    1482 scope.go:117] "RemoveContainer" containerID="f3a4ca79ab8fe87888f803ef7d7b048ec9d22cac5f4af5fc2ad9098993e55a5f"
	Aug 16 17:59:22 addons-035693 kubelet[1482]: I0816 17:59:22.909170    1482 scope.go:117] "RemoveContainer" containerID="f3a4ca79ab8fe87888f803ef7d7b048ec9d22cac5f4af5fc2ad9098993e55a5f"
	Aug 16 17:59:22 addons-035693 kubelet[1482]: E0816 17:59:22.909554    1482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3a4ca79ab8fe87888f803ef7d7b048ec9d22cac5f4af5fc2ad9098993e55a5f\": container with ID starting with f3a4ca79ab8fe87888f803ef7d7b048ec9d22cac5f4af5fc2ad9098993e55a5f not found: ID does not exist" containerID="f3a4ca79ab8fe87888f803ef7d7b048ec9d22cac5f4af5fc2ad9098993e55a5f"
	Aug 16 17:59:22 addons-035693 kubelet[1482]: I0816 17:59:22.909594    1482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3a4ca79ab8fe87888f803ef7d7b048ec9d22cac5f4af5fc2ad9098993e55a5f"} err="failed to get container status \"f3a4ca79ab8fe87888f803ef7d7b048ec9d22cac5f4af5fc2ad9098993e55a5f\": rpc error: code = NotFound desc = could not find container \"f3a4ca79ab8fe87888f803ef7d7b048ec9d22cac5f4af5fc2ad9098993e55a5f\": container with ID starting with f3a4ca79ab8fe87888f803ef7d7b048ec9d22cac5f4af5fc2ad9098993e55a5f not found: ID does not exist"
	Aug 16 17:59:22 addons-035693 kubelet[1482]: I0816 17:59:22.939095    1482 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0bdf104e-0061-4330-aaa3-3ed64ee249e7-tmp-dir\") on node \"addons-035693\" DevicePath \"\""
	Aug 16 17:59:22 addons-035693 kubelet[1482]: I0816 17:59:22.939133    1482 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-29z6w\" (UniqueName: \"kubernetes.io/projected/0bdf104e-0061-4330-aaa3-3ed64ee249e7-kube-api-access-29z6w\") on node \"addons-035693\" DevicePath \"\""
	
	
	==> storage-provisioner [63d1a9f6ba13e8a852d2e9c0f42a6ffe70f60f0cdd413245f71726d5791c7fff] <==
	I0816 17:50:51.313414       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 17:50:51.328685       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 17:50:51.328735       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 17:50:51.336716       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 17:50:51.339869       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-035693_dbfaaf44-207e-4f5f-839a-e775aa43bdeb!
	I0816 17:50:51.337244       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2627d9f4-4c1e-47dc-8d77-6117f68f8057", APIVersion:"v1", ResourceVersion:"945", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-035693_dbfaaf44-207e-4f5f-839a-e775aa43bdeb became leader
	I0816 17:50:51.440116       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-035693_dbfaaf44-207e-4f5f-839a-e775aa43bdeb!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-035693 -n addons-035693
helpers_test.go:261: (dbg) Run:  kubectl --context addons-035693 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (325.61s)

                                                
                                    

Test pass (296/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.12
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.4
9 TestDownloadOnly/v1.20.0/DeleteAll 0.37
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.22
12 TestDownloadOnly/v1.31.0/json-events 9.26
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.07
18 TestDownloadOnly/v1.31.0/DeleteAll 0.2
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 212.71
31 TestAddons/serial/GCPAuth/Namespaces 0.32
33 TestAddons/parallel/Registry 15.45
35 TestAddons/parallel/InspektorGadget 11.91
39 TestAddons/parallel/CSI 43.66
40 TestAddons/parallel/Headlamp 17.99
41 TestAddons/parallel/CloudSpanner 6.79
42 TestAddons/parallel/LocalPath 13.68
43 TestAddons/parallel/NvidiaDevicePlugin 6.55
44 TestAddons/parallel/Yakd 11.81
45 TestAddons/StoppedEnableDisable 12.2
46 TestCertOptions 33.64
47 TestCertExpiration 237.53
49 TestForceSystemdFlag 40.17
50 TestForceSystemdEnv 34.69
56 TestErrorSpam/setup 31.5
57 TestErrorSpam/start 0.83
58 TestErrorSpam/status 1.09
59 TestErrorSpam/pause 1.78
60 TestErrorSpam/unpause 1.76
61 TestErrorSpam/stop 1.44
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 48.11
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 17.32
68 TestFunctional/serial/KubeContext 0.1
69 TestFunctional/serial/KubectlGetPods 0.13
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.33
73 TestFunctional/serial/CacheCmd/cache/add_local 1.42
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.15
78 TestFunctional/serial/CacheCmd/cache/delete 0.17
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
81 TestFunctional/serial/ExtraConfig 33.26
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.7
84 TestFunctional/serial/LogsFileCmd 1.71
85 TestFunctional/serial/InvalidService 4.15
87 TestFunctional/parallel/ConfigCmd 0.47
88 TestFunctional/parallel/DashboardCmd 11.11
89 TestFunctional/parallel/DryRun 0.44
90 TestFunctional/parallel/InternationalLanguage 0.19
91 TestFunctional/parallel/StatusCmd 1
95 TestFunctional/parallel/ServiceCmdConnect 10.58
96 TestFunctional/parallel/AddonsCmd 0.14
97 TestFunctional/parallel/PersistentVolumeClaim 27.09
99 TestFunctional/parallel/SSHCmd 0.7
100 TestFunctional/parallel/CpCmd 1.64
102 TestFunctional/parallel/FileSync 0.27
103 TestFunctional/parallel/CertSync 1.62
107 TestFunctional/parallel/NodeLabels 0.09
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.79
111 TestFunctional/parallel/License 0.29
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.74
114 TestFunctional/parallel/Version/short 0.07
115 TestFunctional/parallel/Version/components 1.16
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.44
119 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
120 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
121 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
122 TestFunctional/parallel/ImageCommands/ImageListYaml 0.35
123 TestFunctional/parallel/ImageCommands/ImageBuild 2.87
124 TestFunctional/parallel/ImageCommands/Setup 0.73
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.46
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.95
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.22
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.83
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.66
132 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
141 TestFunctional/parallel/MountCmd/any-port 8.32
142 TestFunctional/parallel/MountCmd/specific-port 2.13
143 TestFunctional/parallel/MountCmd/VerifyCleanup 2.63
144 TestFunctional/parallel/ServiceCmd/DeployApp 8.26
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
146 TestFunctional/parallel/ProfileCmd/profile_list 0.39
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
148 TestFunctional/parallel/ServiceCmd/List 1.44
149 TestFunctional/parallel/ServiceCmd/JSONOutput 1.43
150 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
151 TestFunctional/parallel/ServiceCmd/Format 0.57
152 TestFunctional/parallel/ServiceCmd/URL 0.55
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 176.12
160 TestMultiControlPlane/serial/DeployApp 6.65
161 TestMultiControlPlane/serial/PingHostFromPods 1.64
162 TestMultiControlPlane/serial/AddWorkerNode 35.27
163 TestMultiControlPlane/serial/NodeLabels 0.11
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.73
165 TestMultiControlPlane/serial/CopyFile 18.98
166 TestMultiControlPlane/serial/StopSecondaryNode 12.79
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.54
168 TestMultiControlPlane/serial/RestartSecondaryNode 31.66
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 5.05
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 144.38
171 TestMultiControlPlane/serial/DeleteSecondaryNode 12.99
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.55
173 TestMultiControlPlane/serial/StopCluster 35.87
174 TestMultiControlPlane/serial/RestartCluster 104.15
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.54
176 TestMultiControlPlane/serial/AddSecondaryNode 75.52
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.77
181 TestJSONOutput/start/Command 51.66
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.77
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.67
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.98
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.21
206 TestKicCustomNetwork/create_custom_network 42.26
207 TestKicCustomNetwork/use_default_bridge_network 33.41
208 TestKicExistingNetwork 34.51
209 TestKicCustomSubnet 34.75
210 TestKicStaticIP 32.94
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 70.39
215 TestMountStart/serial/StartWithMountFirst 7.33
216 TestMountStart/serial/VerifyMountFirst 0.25
217 TestMountStart/serial/StartWithMountSecond 6.85
218 TestMountStart/serial/VerifyMountSecond 0.27
219 TestMountStart/serial/DeleteFirst 2.12
220 TestMountStart/serial/VerifyMountPostDelete 0.28
221 TestMountStart/serial/Stop 1.22
222 TestMountStart/serial/RestartStopped 7.82
223 TestMountStart/serial/VerifyMountPostStop 0.28
226 TestMultiNode/serial/FreshStart2Nodes 78.21
227 TestMultiNode/serial/DeployApp2Nodes 5.06
228 TestMultiNode/serial/PingHostFrom2Pods 1
229 TestMultiNode/serial/AddNode 29
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.36
232 TestMultiNode/serial/CopyFile 10.02
233 TestMultiNode/serial/StopNode 2.26
234 TestMultiNode/serial/StartAfterStop 10.1
235 TestMultiNode/serial/RestartKeepsNodes 98.3
236 TestMultiNode/serial/DeleteNode 5.61
237 TestMultiNode/serial/StopMultiNode 23.82
238 TestMultiNode/serial/RestartMultiNode 56.3
239 TestMultiNode/serial/ValidateNameConflict 34.87
244 TestPreload 124.38
246 TestScheduledStopUnix 104.91
249 TestInsufficientStorage 13.53
250 TestRunningBinaryUpgrade 63.84
252 TestKubernetesUpgrade 467.09
253 TestMissingContainerUpgrade 170.7
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 42.4
257 TestNoKubernetes/serial/StartWithStopK8s 9.15
258 TestNoKubernetes/serial/Start 9.17
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
260 TestNoKubernetes/serial/ProfileList 1.08
261 TestNoKubernetes/serial/Stop 1.27
262 TestNoKubernetes/serial/StartNoArgs 7.05
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
264 TestStoppedBinaryUpgrade/Setup 0.93
265 TestStoppedBinaryUpgrade/Upgrade 74.56
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.1
275 TestPause/serial/Start 53.36
276 TestPause/serial/SecondStartNoReconfiguration 23.76
277 TestPause/serial/Pause 0.92
278 TestPause/serial/VerifyStatus 0.44
279 TestPause/serial/Unpause 0.86
280 TestPause/serial/PauseAgain 0.84
281 TestPause/serial/DeletePaused 2.84
282 TestPause/serial/VerifyDeletedResources 0.35
290 TestNetworkPlugins/group/false 3.55
295 TestStartStop/group/old-k8s-version/serial/FirstStart 148.72
297 TestStartStop/group/no-preload/serial/FirstStart 70.26
298 TestStartStop/group/old-k8s-version/serial/DeployApp 9.11
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.55
300 TestStartStop/group/old-k8s-version/serial/Stop 13.54
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
302 TestStartStop/group/old-k8s-version/serial/SecondStart 144.16
303 TestStartStop/group/no-preload/serial/DeployApp 10.52
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.62
305 TestStartStop/group/no-preload/serial/Stop 12.3
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
307 TestStartStop/group/no-preload/serial/SecondStart 280.09
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
310 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
311 TestStartStop/group/old-k8s-version/serial/Pause 3.07
313 TestStartStop/group/embed-certs/serial/FirstStart 52.61
314 TestStartStop/group/embed-certs/serial/DeployApp 10.33
315 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.09
316 TestStartStop/group/embed-certs/serial/Stop 11.94
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
318 TestStartStop/group/embed-certs/serial/SecondStart 297.38
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
322 TestStartStop/group/no-preload/serial/Pause 3.2
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 48.95
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.56
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.19
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 12
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 302.65
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
333 TestStartStop/group/embed-certs/serial/Pause 3.04
335 TestStartStop/group/newest-cni/serial/FirstStart 34.43
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.4
338 TestStartStop/group/newest-cni/serial/Stop 1.29
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.27
340 TestStartStop/group/newest-cni/serial/SecondStart 16.81
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
344 TestStartStop/group/newest-cni/serial/Pause 3.03
345 TestNetworkPlugins/group/auto/Start 53.56
346 TestNetworkPlugins/group/auto/KubeletFlags 0.32
347 TestNetworkPlugins/group/auto/NetCatPod 10.3
348 TestNetworkPlugins/group/auto/DNS 0.19
349 TestNetworkPlugins/group/auto/Localhost 0.16
350 TestNetworkPlugins/group/auto/HairPin 0.18
351 TestNetworkPlugins/group/kindnet/Start 54.16
352 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
353 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
354 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
355 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
356 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.14
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.39
358 TestNetworkPlugins/group/kindnet/NetCatPod 12.33
359 TestNetworkPlugins/group/calico/Start 69.55
360 TestNetworkPlugins/group/kindnet/DNS 0.26
361 TestNetworkPlugins/group/kindnet/Localhost 0.19
362 TestNetworkPlugins/group/kindnet/HairPin 0.27
363 TestNetworkPlugins/group/custom-flannel/Start 60.28
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/KubeletFlags 0.31
366 TestNetworkPlugins/group/calico/NetCatPod 15.32
367 TestNetworkPlugins/group/calico/DNS 0.22
368 TestNetworkPlugins/group/calico/Localhost 0.15
369 TestNetworkPlugins/group/calico/HairPin 0.18
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.43
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.4
372 TestNetworkPlugins/group/custom-flannel/DNS 0.23
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.24
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
375 TestNetworkPlugins/group/enable-default-cni/Start 51.37
376 TestNetworkPlugins/group/flannel/Start 58.93
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.42
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.33
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/bridge/Start 84.94
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.35
385 TestNetworkPlugins/group/flannel/NetCatPod 13.35
386 TestNetworkPlugins/group/flannel/DNS 0.22
387 TestNetworkPlugins/group/flannel/Localhost 0.2
388 TestNetworkPlugins/group/flannel/HairPin 0.21
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
390 TestNetworkPlugins/group/bridge/NetCatPod 11.26
391 TestNetworkPlugins/group/bridge/DNS 0.17
392 TestNetworkPlugins/group/bridge/Localhost 0.16
393 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (10.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-936236 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-936236 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.119961101s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.12s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.4s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-936236
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-936236: exit status 85 (396.954855ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-936236 | jenkins | v1.33.1 | 16 Aug 24 17:48 UTC |          |
	|         | -p download-only-936236        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 17:48:50
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 17:48:50.159059  284288 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:48:50.159501  284288 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:48:50.159514  284288 out.go:358] Setting ErrFile to fd 2...
	I0816 17:48:50.159520  284288 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:48:50.159789  284288 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-278896/.minikube/bin
	W0816 17:48:50.159938  284288 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19461-278896/.minikube/config/config.json: open /home/jenkins/minikube-integration/19461-278896/.minikube/config/config.json: no such file or directory
	I0816 17:48:50.160348  284288 out.go:352] Setting JSON to true
	I0816 17:48:50.161207  284288 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5479,"bootTime":1723825052,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0816 17:48:50.161282  284288 start.go:139] virtualization:  
	I0816 17:48:50.164317  284288 out.go:97] [download-only-936236] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0816 17:48:50.164604  284288 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19461-278896/.minikube/cache/preloaded-tarball: no such file or directory
	I0816 17:48:50.164655  284288 notify.go:220] Checking for updates...
	I0816 17:48:50.166259  284288 out.go:169] MINIKUBE_LOCATION=19461
	I0816 17:48:50.168263  284288 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 17:48:50.170252  284288 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19461-278896/kubeconfig
	I0816 17:48:50.172170  284288 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-278896/.minikube
	I0816 17:48:50.174310  284288 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0816 17:48:50.178321  284288 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0816 17:48:50.178592  284288 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 17:48:50.205710  284288 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0816 17:48:50.205821  284288 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 17:48:50.260140  284288 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-16 17:48:50.250554437 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 17:48:50.260256  284288 docker.go:307] overlay module found
	I0816 17:48:50.262091  284288 out.go:97] Using the docker driver based on user configuration
	I0816 17:48:50.262125  284288 start.go:297] selected driver: docker
	I0816 17:48:50.262132  284288 start.go:901] validating driver "docker" against <nil>
	I0816 17:48:50.262252  284288 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 17:48:50.316309  284288 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-16 17:48:50.306841728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 17:48:50.316474  284288 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 17:48:50.316784  284288 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0816 17:48:50.316941  284288 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 17:48:50.319033  284288 out.go:169] Using Docker driver with root privileges
	I0816 17:48:50.320935  284288 cni.go:84] Creating CNI manager for ""
	I0816 17:48:50.320961  284288 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0816 17:48:50.320973  284288 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 17:48:50.321047  284288 start.go:340] cluster config:
	{Name:download-only-936236 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-936236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:48:50.322941  284288 out.go:97] Starting "download-only-936236" primary control-plane node in "download-only-936236" cluster
	I0816 17:48:50.322964  284288 cache.go:121] Beginning downloading kic base image for docker with crio
	I0816 17:48:50.325189  284288 out.go:97] Pulling base image v0.0.44-1723740748-19452 ...
	I0816 17:48:50.325226  284288 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 17:48:50.325394  284288 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0816 17:48:50.340510  284288 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0816 17:48:50.340749  284288 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0816 17:48:50.340850  284288 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0816 17:48:50.377115  284288 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0816 17:48:50.377155  284288 cache.go:56] Caching tarball of preloaded images
	I0816 17:48:50.377760  284288 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 17:48:50.380222  284288 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0816 17:48:50.380246  284288 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0816 17:48:50.465481  284288 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19461-278896/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0816 17:48:54.856877  284288 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	
	
	* The control-plane node download-only-936236 host does not exist
	  To start a cluster, run: "minikube start -p download-only-936236"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.40s)

TestDownloadOnly/v1.20.0/DeleteAll (0.37s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.37s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-936236
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnly/v1.31.0/json-events (9.26s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-628669 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-628669 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.261634234s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (9.26s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-628669
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-628669: exit status 85 (74.014115ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-936236 | jenkins | v1.33.1 | 16 Aug 24 17:48 UTC |                     |
	|         | -p download-only-936236        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 16 Aug 24 17:49 UTC | 16 Aug 24 17:49 UTC |
	| delete  | -p download-only-936236        | download-only-936236 | jenkins | v1.33.1 | 16 Aug 24 17:49 UTC | 16 Aug 24 17:49 UTC |
	| start   | -o=json --download-only        | download-only-628669 | jenkins | v1.33.1 | 16 Aug 24 17:49 UTC |                     |
	|         | -p download-only-628669        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 17:49:01
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 17:49:01.270010  284489 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:49:01.270221  284489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:49:01.270235  284489 out.go:358] Setting ErrFile to fd 2...
	I0816 17:49:01.270241  284489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:49:01.270519  284489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-278896/.minikube/bin
	I0816 17:49:01.270953  284489 out.go:352] Setting JSON to true
	I0816 17:49:01.271844  284489 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5490,"bootTime":1723825052,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0816 17:49:01.271924  284489 start.go:139] virtualization:  
	I0816 17:49:01.317182  284489 out.go:97] [download-only-628669] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0816 17:49:01.317515  284489 notify.go:220] Checking for updates...
	I0816 17:49:01.349530  284489 out.go:169] MINIKUBE_LOCATION=19461
	I0816 17:49:01.383437  284489 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 17:49:01.389218  284489 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19461-278896/kubeconfig
	I0816 17:49:01.391507  284489 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-278896/.minikube
	I0816 17:49:01.396533  284489 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0816 17:49:01.412487  284489 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0816 17:49:01.412815  284489 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 17:49:01.434660  284489 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0816 17:49:01.434782  284489 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 17:49:01.489168  284489 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-16 17:49:01.479543759 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 17:49:01.489288  284489 docker.go:307] overlay module found
	I0816 17:49:01.491699  284489 out.go:97] Using the docker driver based on user configuration
	I0816 17:49:01.491731  284489 start.go:297] selected driver: docker
	I0816 17:49:01.491738  284489 start.go:901] validating driver "docker" against <nil>
	I0816 17:49:01.491867  284489 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 17:49:01.544141  284489 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-16 17:49:01.534495717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 17:49:01.544307  284489 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 17:49:01.544654  284489 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0816 17:49:01.544815  284489 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 17:49:01.547136  284489 out.go:169] Using Docker driver with root privileges
	I0816 17:49:01.549099  284489 cni.go:84] Creating CNI manager for ""
	I0816 17:49:01.549121  284489 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0816 17:49:01.549140  284489 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 17:49:01.549222  284489 start.go:340] cluster config:
	{Name:download-only-628669 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-628669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:49:01.551208  284489 out.go:97] Starting "download-only-628669" primary control-plane node in "download-only-628669" cluster
	I0816 17:49:01.551238  284489 cache.go:121] Beginning downloading kic base image for docker with crio
	I0816 17:49:01.553237  284489 out.go:97] Pulling base image v0.0.44-1723740748-19452 ...
	I0816 17:49:01.553268  284489 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 17:49:01.553438  284489 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0816 17:49:01.569254  284489 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0816 17:49:01.569391  284489 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0816 17:49:01.569418  284489 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0816 17:49:01.569427  284489 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0816 17:49:01.569436  284489 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0816 17:49:01.611752  284489 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0816 17:49:01.611779  284489 cache.go:56] Caching tarball of preloaded images
	I0816 17:49:01.612624  284489 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 17:49:01.615433  284489 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0816 17:49:01.615462  284489 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 ...
	I0816 17:49:01.709340  284489 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e6af375765e1700a37be5f07489fb80e -> /home/jenkins/minikube-integration/19461-278896/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-628669 host does not exist
	  To start a cluster, run: "minikube start -p download-only-628669"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

TestDownloadOnly/v1.31.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.20s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-628669
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-205704 --alsologtostderr --binary-mirror http://127.0.0.1:41837 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-205704" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-205704
--- PASS: TestBinaryMirror (0.60s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-035693
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-035693: exit status 85 (90.255463ms)

-- stdout --
	* Profile "addons-035693" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-035693"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-035693
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-035693: exit status 85 (65.795169ms)

-- stdout --
	* Profile "addons-035693" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-035693"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (212.71s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-035693 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-035693 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m32.706910851s)
--- PASS: TestAddons/Setup (212.71s)

TestAddons/serial/GCPAuth/Namespaces (0.32s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-035693 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-035693 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.32s)

TestAddons/parallel/Registry (15.45s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 5.884818ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-tm8w6" [7a1098d6-9eed-44ed-b050-d7eb7f621f53] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004370183s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-t2nrw" [b85d1b9d-5cbc-4b35-a578-9eb458257f07] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003903845s
addons_test.go:342: (dbg) Run:  kubectl --context addons-035693 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-035693 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-035693 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.474920245s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-035693 ip
2024/08/16 17:53:17 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-035693 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.45s)

TestAddons/parallel/InspektorGadget (11.91s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-g74h9" [d6293228-b70a-4bfa-9e9d-c5ed3bd26ed2] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005030029s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-035693
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-035693: (5.900288066s)
--- PASS: TestAddons/parallel/InspektorGadget (11.91s)

TestAddons/parallel/CSI (43.66s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.463717ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-035693 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-035693 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-035693 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-035693 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-035693 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-035693 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-035693 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-035693 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-035693 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-035693 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-035693 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-035693 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [448adb00-0c24-4429-9683-3a4232a5c3e3] Pending
helpers_test.go:344: "task-pv-pod" [448adb00-0c24-4429-9683-3a4232a5c3e3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [448adb00-0c24-4429-9683-3a4232a5c3e3] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003868244s
addons_test.go:590: (dbg) Run:  kubectl --context addons-035693 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-035693 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-035693 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-035693 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-035693 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-035693 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-035693 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-035693 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-035693 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-035693 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [928d9706-cf50-4e61-8ae4-1e2ea4d3cc1e] Pending
helpers_test.go:344: "task-pv-pod-restore" [928d9706-cf50-4e61-8ae4-1e2ea4d3cc1e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [928d9706-cf50-4e61-8ae4-1e2ea4d3cc1e] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005128099s
addons_test.go:632: (dbg) Run:  kubectl --context addons-035693 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-035693 delete pod task-pv-pod-restore: (2.525006391s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-035693 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-035693 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-035693 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-035693 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.77474733s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-035693 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-arm64 -p addons-035693 addons disable volumesnapshots --alsologtostderr -v=1: (1.149583869s)
--- PASS: TestAddons/parallel/CSI (43.66s)

TestAddons/parallel/Headlamp (17.99s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-035693 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-vxppd" [36f73ae6-1393-4b6c-8396-32b81f97d881] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-vxppd" [36f73ae6-1393-4b6c-8396-32b81f97d881] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003844276s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-035693 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-035693 addons disable headlamp --alsologtostderr -v=1: (6.05767029s)
--- PASS: TestAddons/parallel/Headlamp (17.99s)

TestAddons/parallel/CloudSpanner (6.79s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-2m4j4" [f5c4d2fb-88b8-44c3-82a4-f5e0026b6349] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003626441s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-035693
--- PASS: TestAddons/parallel/CloudSpanner (6.79s)

TestAddons/parallel/LocalPath (13.68s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-035693 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-035693 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-035693 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-035693 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-035693 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-035693 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-035693 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-035693 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-035693 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-035693 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-035693 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b18f9af7-ba9d-4347-991c-464f5147a4eb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b18f9af7-ba9d-4347-991c-464f5147a4eb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b18f9af7-ba9d-4347-991c-464f5147a4eb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005039414s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-035693 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-035693 ssh "cat /opt/local-path-provisioner/pvc-8f665f0d-7f70-4b2f-b5f6-7d515479e3bb_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-035693 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-035693 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-035693 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (13.68s)

TestAddons/parallel/NvidiaDevicePlugin (6.55s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-jsx2r" [c4f0b8cd-7cfb-4b35-b194-ec9b1febfd6b] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004941865s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-035693
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.55s)

TestAddons/parallel/Yakd (11.81s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-pflsj" [71fb92b6-6623-40e2-9f46-457f26af970d] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004026768s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-035693 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-035693 addons disable yakd --alsologtostderr -v=1: (5.808815208s)
--- PASS: TestAddons/parallel/Yakd (11.81s)

TestAddons/StoppedEnableDisable (12.2s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-035693
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-035693: (11.926030771s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-035693
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-035693
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-035693
--- PASS: TestAddons/StoppedEnableDisable (12.20s)

TestCertOptions (33.64s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-000639 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-000639 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (30.754219925s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-000639 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-000639 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-000639 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-000639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-000639
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-000639: (2.135933577s)
--- PASS: TestCertOptions (33.64s)

TestCertExpiration (237.53s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-740608 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-740608 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (35.433729018s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-740608 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-740608 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (19.600452477s)
helpers_test.go:175: Cleaning up "cert-expiration-740608" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-740608
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-740608: (2.496475298s)
--- PASS: TestCertExpiration (237.53s)

TestForceSystemdFlag (40.17s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-899827 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-899827 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.351839854s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-899827 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-899827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-899827
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-899827: (2.521955052s)
--- PASS: TestForceSystemdFlag (40.17s)

TestForceSystemdEnv (34.69s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-158648 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0816 18:37:45.770990  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-158648 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.263407467s)
helpers_test.go:175: Cleaning up "force-systemd-env-158648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-158648
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-158648: (2.429914778s)
--- PASS: TestForceSystemdEnv (34.69s)

TestErrorSpam/setup (31.5s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-212077 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-212077 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-212077 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-212077 --driver=docker  --container-runtime=crio: (31.494924107s)
--- PASS: TestErrorSpam/setup (31.50s)

TestErrorSpam/start (0.83s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212077 --log_dir /tmp/nospam-212077 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212077 --log_dir /tmp/nospam-212077 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212077 --log_dir /tmp/nospam-212077 start --dry-run
--- PASS: TestErrorSpam/start (0.83s)

TestErrorSpam/status (1.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212077 --log_dir /tmp/nospam-212077 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212077 --log_dir /tmp/nospam-212077 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212077 --log_dir /tmp/nospam-212077 status
--- PASS: TestErrorSpam/status (1.09s)

TestErrorSpam/pause (1.78s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212077 --log_dir /tmp/nospam-212077 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212077 --log_dir /tmp/nospam-212077 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212077 --log_dir /tmp/nospam-212077 pause
--- PASS: TestErrorSpam/pause (1.78s)

TestErrorSpam/unpause (1.76s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212077 --log_dir /tmp/nospam-212077 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212077 --log_dir /tmp/nospam-212077 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212077 --log_dir /tmp/nospam-212077 unpause
--- PASS: TestErrorSpam/unpause (1.76s)

TestErrorSpam/stop (1.44s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212077 --log_dir /tmp/nospam-212077 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-212077 --log_dir /tmp/nospam-212077 stop: (1.23587013s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212077 --log_dir /tmp/nospam-212077 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212077 --log_dir /tmp/nospam-212077 stop
--- PASS: TestErrorSpam/stop (1.44s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19461-278896/.minikube/files/etc/test/nested/copy/284283/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (48.11s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-741792 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-741792 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (48.110672449s)
--- PASS: TestFunctional/serial/StartWithProxy (48.11s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (17.32s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-741792 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-741792 --alsologtostderr -v=8: (17.316108657s)
functional_test.go:663: soft start took 17.316657465s for "functional-741792" cluster.
--- PASS: TestFunctional/serial/SoftStart (17.32s)

TestFunctional/serial/KubeContext (0.1s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.10s)

TestFunctional/serial/KubectlGetPods (0.13s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-741792 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.13s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-741792 cache add registry.k8s.io/pause:3.1: (1.541805471s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-741792 cache add registry.k8s.io/pause:3.3: (1.418045138s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-741792 cache add registry.k8s.io/pause:latest: (1.366839296s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.33s)

TestFunctional/serial/CacheCmd/cache/add_local (1.42s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-741792 /tmp/TestFunctionalserialCacheCmdcacheadd_local2969264328/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 cache add minikube-local-cache-test:functional-741792
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 cache delete minikube-local-cache-test:functional-741792
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-741792
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.42s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-741792 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (302.209329ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-741792 cache reload: (1.203701152s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.15s)

TestFunctional/serial/CacheCmd/cache/delete (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 kubectl -- --context functional-741792 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-741792 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (33.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-741792 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-741792 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.26132087s)
functional_test.go:761: restart took 33.261436568s for "functional-741792" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.26s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-741792 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.7s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-741792 logs: (1.69636794s)
--- PASS: TestFunctional/serial/LogsCmd (1.70s)

TestFunctional/serial/LogsFileCmd (1.71s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 logs --file /tmp/TestFunctionalserialLogsFileCmd3360671942/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-741792 logs --file /tmp/TestFunctionalserialLogsFileCmd3360671942/001/logs.txt: (1.709034082s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.71s)

TestFunctional/serial/InvalidService (4.15s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-741792 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-741792
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-741792: exit status 115 (532.64126ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31668 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-741792 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.15s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-741792 config get cpus: exit status 14 (54.199981ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-741792 config get cpus: exit status 14 (111.980826ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

TestFunctional/parallel/DashboardCmd (11.11s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-741792 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-741792 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 313053: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.11s)

TestFunctional/parallel/DryRun (0.44s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-741792 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-741792 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (208.692455ms)

-- stdout --
	* [functional-741792] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19461-278896/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-278896/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0816 18:03:02.994248  312551 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:03:02.994426  312551 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:03:02.994440  312551 out.go:358] Setting ErrFile to fd 2...
	I0816 18:03:02.994447  312551 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:03:02.994703  312551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-278896/.minikube/bin
	I0816 18:03:02.995101  312551 out.go:352] Setting JSON to false
	I0816 18:03:02.996079  312551 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6331,"bootTime":1723825052,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0816 18:03:02.996161  312551 start.go:139] virtualization:  
	I0816 18:03:03.000878  312551 out.go:177] * [functional-741792] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0816 18:03:03.003739  312551 notify.go:220] Checking for updates...
	I0816 18:03:03.007603  312551 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 18:03:03.015458  312551 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 18:03:03.019222  312551 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-278896/kubeconfig
	I0816 18:03:03.021674  312551 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-278896/.minikube
	I0816 18:03:03.023939  312551 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0816 18:03:03.026760  312551 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 18:03:03.029607  312551 config.go:182] Loaded profile config "functional-741792": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:03:03.030152  312551 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 18:03:03.055428  312551 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0816 18:03:03.055554  312551 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 18:03:03.141582  312551 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-16 18:03:03.131279744 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 18:03:03.141702  312551 docker.go:307] overlay module found
	I0816 18:03:03.144559  312551 out.go:177] * Using the docker driver based on existing profile
	I0816 18:03:03.146647  312551 start.go:297] selected driver: docker
	I0816 18:03:03.146676  312551 start.go:901] validating driver "docker" against &{Name:functional-741792 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-741792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:03:03.146826  312551 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 18:03:03.149599  312551 out.go:201] 
	W0816 18:03:03.151276  312551 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0816 18:03:03.152978  312551 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-741792 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.44s)

TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-741792 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-741792 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (190.275591ms)

-- stdout --
	* [functional-741792] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19461-278896/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-278896/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I0816 18:03:04.439794  312879 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:03:04.439989  312879 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:03:04.440002  312879 out.go:358] Setting ErrFile to fd 2...
	I0816 18:03:04.440008  312879 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:03:04.440395  312879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-278896/.minikube/bin
	I0816 18:03:04.440834  312879 out.go:352] Setting JSON to false
	I0816 18:03:04.441822  312879 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6333,"bootTime":1723825052,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0816 18:03:04.441901  312879 start.go:139] virtualization:  
	I0816 18:03:04.445564  312879 out.go:177] * [functional-741792] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0816 18:03:04.447478  312879 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 18:03:04.447597  312879 notify.go:220] Checking for updates...
	I0816 18:03:04.451125  312879 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 18:03:04.453140  312879 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-278896/kubeconfig
	I0816 18:03:04.454909  312879 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-278896/.minikube
	I0816 18:03:04.456653  312879 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0816 18:03:04.458282  312879 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 18:03:04.460405  312879 config.go:182] Loaded profile config "functional-741792": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:03:04.461019  312879 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 18:03:04.492738  312879 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0816 18:03:04.492850  312879 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 18:03:04.566721  312879 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-16 18:03:04.555656428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 18:03:04.566843  312879 docker.go:307] overlay module found
	I0816 18:03:04.569031  312879 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0816 18:03:04.570924  312879 start.go:297] selected driver: docker
	I0816 18:03:04.570948  312879 start.go:901] validating driver "docker" against &{Name:functional-741792 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-741792 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:03:04.571056  312879 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 18:03:04.573705  312879 out.go:201] 
	W0816 18:03:04.575239  312879 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0816 18:03:04.576759  312879 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

TestFunctional/parallel/StatusCmd (1s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.00s)

TestFunctional/parallel/ServiceCmdConnect (10.58s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-741792 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-741792 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-6pfxb" [0b6ef1c9-d35a-423c-9c78-ce8996c7456e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-6pfxb" [0b6ef1c9-d35a-423c-9c78-ce8996c7456e] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.005253926s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32255
functional_test.go:1675: http://192.168.49.2:32255: success! body:

Hostname: hello-node-connect-65d86f57f4-6pfxb

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32255
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.58s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (27.09s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [946acc61-d1f0-4d4c-8785-f571264c1eef] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004143443s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-741792 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-741792 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-741792 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-741792 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d53e0c12-d53a-4095-ab47-688f7b352e5c] Pending
helpers_test.go:344: "sp-pod" [d53e0c12-d53a-4095-ab47-688f7b352e5c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d53e0c12-d53a-4095-ab47-688f7b352e5c] Running
E0816 18:02:48.340669  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:02:50.902983  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004081181s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-741792 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-741792 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-741792 delete -f testdata/storage-provisioner/pod.yaml: (1.097114058s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-741792 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [79617f21-76ac-4e4f-aacd-b0d7720b825e] Pending
E0816 18:02:56.025168  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [79617f21-76ac-4e4f-aacd-b0d7720b825e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004471757s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-741792 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.09s)

TestFunctional/parallel/SSHCmd (0.7s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

TestFunctional/parallel/CpCmd (1.64s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh -n functional-741792 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 cp functional-741792:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1103643608/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh -n functional-741792 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh -n functional-741792 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.64s)

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/284283/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh "sudo cat /etc/test/nested/copy/284283/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (1.62s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/284283.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh "sudo cat /etc/ssl/certs/284283.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/284283.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh "sudo cat /usr/share/ca-certificates/284283.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/2842832.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh "sudo cat /etc/ssl/certs/2842832.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/2842832.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh "sudo cat /usr/share/ca-certificates/2842832.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.62s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-741792 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.79s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-741792 ssh "sudo systemctl is-active docker": exit status 1 (435.671749ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-741792 ssh "sudo systemctl is-active containerd": exit status 1 (350.58792ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.79s)

TestFunctional/parallel/License (0.29s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.29s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.74s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-741792 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-741792 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-741792 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-741792 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-741792 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 308494: os: process already finished
helpers_test.go:508: unable to kill pid 308310: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.74s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.16s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-741792 version -o=json --components: (1.163749735s)
--- PASS: TestFunctional/parallel/Version/components (1.16s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-741792 tunnel --alsologtostderr]
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-741792 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-741792 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-741792 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5472a7c0-e114-4919-8b09-7d459a35125f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [5472a7c0-e114-4919-8b09-7d459a35125f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.004503819s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.44s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-741792 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-741792
localhost/kicbase/echo-server:functional-741792
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-741792 image ls --format short --alsologtostderr:
I0816 18:03:12.010573  313598 out.go:345] Setting OutFile to fd 1 ...
I0816 18:03:12.010898  313598 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 18:03:12.010931  313598 out.go:358] Setting ErrFile to fd 2...
I0816 18:03:12.010950  313598 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 18:03:12.011265  313598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-278896/.minikube/bin
I0816 18:03:12.012097  313598 config.go:182] Loaded profile config "functional-741792": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 18:03:12.012318  313598 config.go:182] Loaded profile config "functional-741792": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 18:03:12.013094  313598 cli_runner.go:164] Run: docker container inspect functional-741792 --format={{.State.Status}}
I0816 18:03:12.040558  313598 ssh_runner.go:195] Run: systemctl --version
I0816 18:03:12.040661  313598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-741792
I0816 18:03:12.074329  313598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/functional-741792/id_rsa Username:docker}
I0816 18:03:12.173870  313598 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-741792 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| localhost/kicbase/echo-server           | functional-741792  | ce2d2cda2d858 | 4.79MB |
| localhost/minikube-local-cache-test     | functional-741792  | b272f31094273 | 3.33kB |
| registry.k8s.io/kube-apiserver          | v1.31.0            | cd0f0ae0ec9e0 | 92.6MB |
| registry.k8s.io/kube-controller-manager | v1.31.0            | fcb0683e6bdbd | 86.9MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | 71d55d66fd4ee | 95.9MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | fbbbd428abb4d | 67MB   |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | d5e283bc63d43 | 90.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | 2437cf7621777 | 58.8MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 6a23fa8fd2b78 | 90.3MB |
| docker.io/library/nginx                 | alpine             | 70594c812316a | 48.4MB |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| localhost/my-image                      | functional-741792  | af224fb095d88 | 1.64MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| docker.io/library/nginx                 | latest             | a9dfdba8b7190 | 197MB  |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-741792 image ls --format table --alsologtostderr:
I0816 18:03:15.749330  314002 out.go:345] Setting OutFile to fd 1 ...
I0816 18:03:15.749543  314002 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 18:03:15.749596  314002 out.go:358] Setting ErrFile to fd 2...
I0816 18:03:15.749617  314002 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 18:03:15.749888  314002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-278896/.minikube/bin
I0816 18:03:15.751084  314002 config.go:182] Loaded profile config "functional-741792": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 18:03:15.751231  314002 config.go:182] Loaded profile config "functional-741792": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 18:03:15.751734  314002 cli_runner.go:164] Run: docker container inspect functional-741792 --format={{.State.Status}}
I0816 18:03:15.796548  314002 ssh_runner.go:195] Run: systemctl --version
I0816 18:03:15.796624  314002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-741792
I0816 18:03:15.815753  314002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/functional-741792/id_rsa Username:docker}
I0816 18:03:15.913512  314002 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 image ls --format json --alsologtostderr
2024/08/16 18:03:15 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-741792 image ls --format json --alsologtostderr:
[{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},{"id":"a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add","docker.io/library/nginx@sha256:bab0713884fed8a137ba5bd2d67d218c6192bd79b5a3526d3eb15567e035eb55"],"repoTags":["docker.io/library/nginx:latest"],"size":"197172049"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17
a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"b272f31094273ce3983146842550727e366b85405a4cdbbff097333eda363464","repoDigests":["localhost/minikube-local-cache-test@sha256:3c315a3b6af3e0be48e57a70c5c2221029128eb72e927cc5dd6e292f9a5888e1"],"repoTags":["localhost/minikube-local-cache-test:functional-741792"],"size":"3328"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a
8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"58812704"},{"id":"8cb2091f603e75187e2f6226c59
01d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808","registry.k8s.io/kube-scheduler@sha256:dd427ccac78f027990d5a00936681095842a0d813c70ecc2d4f65f3bd3beef77"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"67007814"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","repoDigests":["docker.io/k
indest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:c26d1775b97b4ba3436f3cdc4d5c153b773ce2b3f5ad8e201f16b09e7182d63e"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"90290738"},{"id":"6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"90295858"},{"id":"cfc034a63e318aeab6620e204df62c768e5b4093cb6011ad94b8c3859f03fd7d","repoDigests":["docker.io/library/868ebfe155ed11b23d8caa043bb6b159704092951a5d556607283d1843a83e67-tmp@sha256:4e499ad1001f8d0f5aa63b81cea0ebb6a909506e12e4198d44685c2a3e95bdba"],"repoTags":[],"size":"1637644"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-
minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:ed8613b19e25d56d25e9ba0d83fd1e14f8ba070cb80e2674ba62ded55e260a9c","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"86930758"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":["registry.k8s.io/kube-proxy@sha256:b7d336a1c5e9719bafe8a97dbb2c503580b5ac898f3f40329fc98f6a1f0ea971","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:
v1.31.0"],"size":"95949719"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":["docker.io/library/nginx@sha256:ba188f579f7a2638229e326e78c957a185630e303757813ef1ad7aac1b8248b6","docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158"],"repoTags":["docker.io/library/nginx:alpine"],"size":"48397013"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a
2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-741792"],"size":"4788229"},{"id":"af224fb095d887121b354a83cadf7eab5238f288775abae84de2d88cb1312d93","repoDigests":["localhost/my-image@sha256:c0010b5c66a6555f73059efedf7ddfe12499e785b66348185f0171b7252bbc57"],"repoTags":["localhost/my-image:functional-741792"],"size":"1640226"},{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:74a8050ec347821b7884ab635f3e7883b5c570388ed8087ffd01fd9fe1cb39c6"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"92567005"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-741792 image ls --format json --alsologtostderr:
I0816 18:03:15.512196  313970 out.go:345] Setting OutFile to fd 1 ...
I0816 18:03:15.512370  313970 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 18:03:15.512383  313970 out.go:358] Setting ErrFile to fd 2...
I0816 18:03:15.512390  313970 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 18:03:15.512682  313970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-278896/.minikube/bin
I0816 18:03:15.513379  313970 config.go:182] Loaded profile config "functional-741792": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 18:03:15.513615  313970 config.go:182] Loaded profile config "functional-741792": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 18:03:15.514118  313970 cli_runner.go:164] Run: docker container inspect functional-741792 --format={{.State.Status}}
I0816 18:03:15.533959  313970 ssh_runner.go:195] Run: systemctl --version
I0816 18:03:15.534021  313970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-741792
I0816 18:03:15.551096  313970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/functional-741792/id_rsa Username:docker}
I0816 18:03:15.641012  313970 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
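The `--format json` listing above is a single JSON array of `{id, repoDigests, repoTags, size}` records, with `size` serialized as a string of bytes. A minimal stdlib-only sketch of consuming that shape, using a trimmed two-entry sample copied from the listing (the full array in the log has many more entries):

```python
import json

# Trimmed illustrative sample in the same shape as the `image ls --format json`
# output above; only two entries (pause:3.1 and an untagged build layer) are kept.
sample = '''[
  {"id": "8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5",
   "repoDigests": ["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],
   "repoTags": ["registry.k8s.io/pause:3.1"],
   "size": "528622"},
  {"id": "cfc034a63e318aeab6620e204df62c768e5b4093cb6011ad94b8c3859f03fd7d",
   "repoDigests": [],
   "repoTags": [],
   "size": "1637644"}
]'''

images = json.loads(sample)
# "size" is a string, so convert before summing.
total_bytes = sum(int(img["size"]) for img in images)
tagged = [tag for img in images for tag in img["repoTags"]]

print(total_bytes)   # 2166266
print(tagged)        # ['registry.k8s.io/pause:3.1']
```

The data itself comes from the `sudo crictl images --output json` call visible in the Stderr trace, though the field names here are minikube's own rendering of that output.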

TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-741792 image ls --format yaml --alsologtostderr:
- id: a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
- docker.io/library/nginx@sha256:bab0713884fed8a137ba5bd2d67d218c6192bd79b5a3526d3eb15567e035eb55
repoTags:
- docker.io/library/nginx:latest
size: "197172049"
- id: b272f31094273ce3983146842550727e366b85405a4cdbbff097333eda363464
repoDigests:
- localhost/minikube-local-cache-test@sha256:3c315a3b6af3e0be48e57a70c5c2221029128eb72e927cc5dd6e292f9a5888e1
repoTags:
- localhost/minikube-local-cache-test:functional-741792
size: "3328"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "58812704"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests:
- docker.io/library/nginx@sha256:ba188f579f7a2638229e326e78c957a185630e303757813ef1ad7aac1b8248b6
- docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158
repoTags:
- docker.io/library/nginx:alpine
size: "48397013"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:74a8050ec347821b7884ab635f3e7883b5c570388ed8087ffd01fd9fe1cb39c6
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "92567005"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:ed8613b19e25d56d25e9ba0d83fd1e14f8ba070cb80e2674ba62ded55e260a9c
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "86930758"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
- registry.k8s.io/kube-scheduler@sha256:dd427ccac78f027990d5a00936681095842a0d813c70ecc2d4f65f3bd3beef77
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "67007814"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-741792
size: "4788229"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b7d336a1c5e9719bafe8a97dbb2c503580b5ac898f3f40329fc98f6a1f0ea971
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "95949719"
- id: d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:c26d1775b97b4ba3436f3cdc4d5c153b773ce2b3f5ad8e201f16b09e7182d63e
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "90290738"
- id: 6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "90295858"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-741792 image ls --format yaml --alsologtostderr:
I0816 18:03:12.299045  313634 out.go:345] Setting OutFile to fd 1 ...
I0816 18:03:12.299321  313634 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 18:03:12.299336  313634 out.go:358] Setting ErrFile to fd 2...
I0816 18:03:12.299343  313634 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 18:03:12.299619  313634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-278896/.minikube/bin
I0816 18:03:12.300283  313634 config.go:182] Loaded profile config "functional-741792": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 18:03:12.300410  313634 config.go:182] Loaded profile config "functional-741792": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 18:03:12.300932  313634 cli_runner.go:164] Run: docker container inspect functional-741792 --format={{.State.Status}}
I0816 18:03:12.320974  313634 ssh_runner.go:195] Run: systemctl --version
I0816 18:03:12.321028  313634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-741792
I0816 18:03:12.344498  313634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/functional-741792/id_rsa Username:docker}
I0816 18:03:12.447659  313634 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)
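The `--format yaml` listing above is a flat sequence of the same image records. A rough line-oriented sketch that tallies entries, untagged images, and total size from text in that shape (a real consumer would use a YAML parser; this avoids non-stdlib dependencies, and the two-entry sample is trimmed from the listing):

```python
# Trimmed sample in the shape of the `image ls --format yaml` output above:
# one tagged image (pause:3.1) and one untagged image (repoTags: []).
sample = """\
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "247562353"
"""

lines = sample.splitlines()
# "- id: " distinguishes record starts from "- <digest>" list items.
entries = [l[len("- id: "):] for l in lines if l.startswith("- id: ")]
untagged = sum(l.strip() == "repoTags: []" for l in lines)
# Sizes are quoted strings of bytes, e.g. size: "528622".
total = sum(int(l.split('"')[1]) for l in lines if l.startswith('size: "'))

print(len(entries), untagged, total)  # 2 1 248090975
```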

TestFunctional/parallel/ImageCommands/ImageBuild (2.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-741792 ssh pgrep buildkitd: exit status 1 (395.017982ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 image build -t localhost/my-image:functional-741792 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-741792 image build -t localhost/my-image:functional-741792 testdata/build --alsologtostderr: (2.241207133s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-741792 image build -t localhost/my-image:functional-741792 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> cfc034a63e3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-741792
--> af224fb095d
Successfully tagged localhost/my-image:functional-741792
af224fb095d887121b354a83cadf7eab5238f288775abae84de2d88cb1312d93
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-741792 image build -t localhost/my-image:functional-741792 testdata/build --alsologtostderr:
I0816 18:03:13.037672  313731 out.go:345] Setting OutFile to fd 1 ...
I0816 18:03:13.038446  313731 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 18:03:13.038481  313731 out.go:358] Setting ErrFile to fd 2...
I0816 18:03:13.038503  313731 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 18:03:13.038797  313731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-278896/.minikube/bin
I0816 18:03:13.039527  313731 config.go:182] Loaded profile config "functional-741792": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 18:03:13.040241  313731 config.go:182] Loaded profile config "functional-741792": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 18:03:13.040886  313731 cli_runner.go:164] Run: docker container inspect functional-741792 --format={{.State.Status}}
I0816 18:03:13.066710  313731 ssh_runner.go:195] Run: systemctl --version
I0816 18:03:13.066767  313731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-741792
I0816 18:03:13.089445  313731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/functional-741792/id_rsa Username:docker}
I0816 18:03:13.194074  313731 build_images.go:161] Building image from path: /tmp/build.4221173546.tar
I0816 18:03:13.194140  313731 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0816 18:03:13.216295  313731 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4221173546.tar
I0816 18:03:13.220736  313731 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4221173546.tar: stat -c "%s %y" /var/lib/minikube/build/build.4221173546.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4221173546.tar': No such file or directory
I0816 18:03:13.220817  313731 ssh_runner.go:362] scp /tmp/build.4221173546.tar --> /var/lib/minikube/build/build.4221173546.tar (3072 bytes)
I0816 18:03:13.259911  313731 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4221173546
I0816 18:03:13.271725  313731 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4221173546 -xf /var/lib/minikube/build/build.4221173546.tar
I0816 18:03:13.285540  313731 crio.go:315] Building image: /var/lib/minikube/build/build.4221173546
I0816 18:03:13.285687  313731 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-741792 /var/lib/minikube/build/build.4221173546 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0816 18:03:15.185985  313731 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-741792 /var/lib/minikube/build/build.4221173546 --cgroup-manager=cgroupfs: (1.900249972s)
I0816 18:03:15.186072  313731 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4221173546
I0816 18:03:15.196192  313731 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4221173546.tar
I0816 18:03:15.206040  313731 build_images.go:217] Built localhost/my-image:functional-741792 from /tmp/build.4221173546.tar
I0816 18:03:15.206091  313731 build_images.go:133] succeeded building to: functional-741792
I0816 18:03:15.206098  313731 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.87s)

TestFunctional/parallel/ImageCommands/Setup (0.73s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-741792
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.73s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 image load --daemon kicbase/echo-server:functional-741792 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-741792 image load --daemon kicbase/echo-server:functional-741792 --alsologtostderr: (1.205428498s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.46s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 image load --daemon kicbase/echo-server:functional-741792 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-741792 image load --daemon kicbase/echo-server:functional-741792 --alsologtostderr: (1.700724927s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.95s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-741792
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 image load --daemon kicbase/echo-server:functional-741792 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.22s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 image save kicbase/echo-server:functional-741792 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 image rm kicbase/echo-server:functional-741792 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.83s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-741792
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 image save --daemon kicbase/echo-server:functional-741792 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-741792
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.66s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-741792 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.105.178 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-741792 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (8.32s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-741792 /tmp/TestFunctionalparallelMountCmdany-port519604664/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723831354103564805" to /tmp/TestFunctionalparallelMountCmdany-port519604664/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723831354103564805" to /tmp/TestFunctionalparallelMountCmdany-port519604664/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723831354103564805" to /tmp/TestFunctionalparallelMountCmdany-port519604664/001/test-1723831354103564805
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-741792 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (440.545359ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 16 18:02 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 16 18:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 16 18:02 test-1723831354103564805
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh cat /mount-9p/test-1723831354103564805
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-741792 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c43f6102-d5f4-4666-a207-e32e3122ba84] Pending
helpers_test.go:344: "busybox-mount" [c43f6102-d5f4-4666-a207-e32e3122ba84] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c43f6102-d5f4-4666-a207-e32e3122ba84] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c43f6102-d5f4-4666-a207-e32e3122ba84] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.006683938s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-741792 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-741792 /tmp/TestFunctionalparallelMountCmdany-port519604664/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.32s)

TestFunctional/parallel/MountCmd/specific-port (2.13s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-741792 /tmp/TestFunctionalparallelMountCmdspecific-port2174471698/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-741792 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (589.366714ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-741792 /tmp/TestFunctionalparallelMountCmdspecific-port2174471698/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-741792 ssh "sudo umount -f /mount-9p": exit status 1 (371.518988ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-741792 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-741792 /tmp/TestFunctionalparallelMountCmdspecific-port2174471698/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.13s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.63s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-741792 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3118818688/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-741792 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3118818688/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-741792 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3118818688/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-741792 ssh "findmnt -T" /mount1: exit status 1 (1.021547357s)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
E0816 18:02:45.771295  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:02:45.778188  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:02:45.789542  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:02:45.810862  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:02:45.852209  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:02:45.933575  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:02:46.095023  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh "findmnt -T" /mount1
E0816 18:02:46.416794  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 ssh "findmnt -T" /mount3
E0816 18:02:47.058612  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-741792 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-741792 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3118818688/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-741792 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3118818688/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-741792 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3118818688/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.63s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-741792 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-741792 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-fmghc" [5bba9576-439b-43be-a053-8630e42c992d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-fmghc" [5bba9576-439b-43be-a053-8630e42c992d] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.01790674s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.26s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "331.780872ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "60.248326ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "331.206323ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "51.234536ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

TestFunctional/parallel/ServiceCmd/List (1.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 service list
E0816 18:03:06.267033  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1459: (dbg) Done: out/minikube-linux-arm64 -p functional-741792 service list: (1.437389886s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.44s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-linux-arm64 -p functional-741792 service list -o json: (1.42688825s)
functional_test.go:1494: Took "1.42699471s" to run "out/minikube-linux-arm64 -p functional-741792 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.43s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32507
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

TestFunctional/parallel/ServiceCmd/Format (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.57s)

TestFunctional/parallel/ServiceCmd/URL (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-741792 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32507
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.55s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-741792
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-741792
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-741792
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (176.12s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-831885 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0816 18:03:26.749106  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:04:07.712121  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:05:29.633796  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-831885 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m55.236474761s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (176.12s)

TestMultiControlPlane/serial/DeployApp (6.65s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-831885 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-831885 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-831885 -- rollout status deployment/busybox: (3.727598122s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-831885 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-831885 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-831885 -- exec busybox-7dff88458-5znn8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-831885 -- exec busybox-7dff88458-cdltp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-831885 -- exec busybox-7dff88458-g5q69 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-831885 -- exec busybox-7dff88458-5znn8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-831885 -- exec busybox-7dff88458-cdltp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-831885 -- exec busybox-7dff88458-g5q69 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-831885 -- exec busybox-7dff88458-5znn8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-831885 -- exec busybox-7dff88458-cdltp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-831885 -- exec busybox-7dff88458-g5q69 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.65s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-831885 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-831885 -- exec busybox-7dff88458-5znn8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-831885 -- exec busybox-7dff88458-5znn8 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-831885 -- exec busybox-7dff88458-cdltp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-831885 -- exec busybox-7dff88458-cdltp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-831885 -- exec busybox-7dff88458-g5q69 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-831885 -- exec busybox-7dff88458-g5q69 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.64s)
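The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above extracts the resolved host IP from busybox's nslookup output before pinging it. A minimal local sketch of that extraction, using hypothetical sample output in busybox's older "Address 1:" format (real values come from the pod):

```shell
# Simulated busybox nslookup output; the fifth line carries the answer
# for the queried name, and the third space-separated field is the IP.
out="Server:    10.96.0.10
Address 1: 10.96.0.10

Name:      host.minikube.internal
Address 1: 192.168.49.1"

# Same pipeline the test runs inside the pod.
ip=$(printf '%s\n' "$out" | awk 'NR==5' | cut -d' ' -f3)
echo "$ip"
```

Note the pipeline is format-sensitive: a busybox version that prints "Address:" instead of "Address 1:" would shift the field positions.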

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (35.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-831885 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-831885 -v=7 --alsologtostderr: (34.279574463s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (35.27s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-831885 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.73s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (18.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-831885 status --output json -v=7 --alsologtostderr: (1.266740709s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 cp testdata/cp-test.txt ha-831885:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 cp ha-831885:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3589809381/001/cp-test_ha-831885.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 cp ha-831885:/home/docker/cp-test.txt ha-831885-m02:/home/docker/cp-test_ha-831885_ha-831885-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885-m02 "sudo cat /home/docker/cp-test_ha-831885_ha-831885-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 cp ha-831885:/home/docker/cp-test.txt ha-831885-m03:/home/docker/cp-test_ha-831885_ha-831885-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885-m03 "sudo cat /home/docker/cp-test_ha-831885_ha-831885-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 cp ha-831885:/home/docker/cp-test.txt ha-831885-m04:/home/docker/cp-test_ha-831885_ha-831885-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885-m04 "sudo cat /home/docker/cp-test_ha-831885_ha-831885-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 cp testdata/cp-test.txt ha-831885-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 cp ha-831885-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3589809381/001/cp-test_ha-831885-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 cp ha-831885-m02:/home/docker/cp-test.txt ha-831885:/home/docker/cp-test_ha-831885-m02_ha-831885.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885 "sudo cat /home/docker/cp-test_ha-831885-m02_ha-831885.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 cp ha-831885-m02:/home/docker/cp-test.txt ha-831885-m03:/home/docker/cp-test_ha-831885-m02_ha-831885-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885-m03 "sudo cat /home/docker/cp-test_ha-831885-m02_ha-831885-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 cp ha-831885-m02:/home/docker/cp-test.txt ha-831885-m04:/home/docker/cp-test_ha-831885-m02_ha-831885-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885-m04 "sudo cat /home/docker/cp-test_ha-831885-m02_ha-831885-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 cp testdata/cp-test.txt ha-831885-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 cp ha-831885-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3589809381/001/cp-test_ha-831885-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 cp ha-831885-m03:/home/docker/cp-test.txt ha-831885:/home/docker/cp-test_ha-831885-m03_ha-831885.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885 "sudo cat /home/docker/cp-test_ha-831885-m03_ha-831885.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 cp ha-831885-m03:/home/docker/cp-test.txt ha-831885-m02:/home/docker/cp-test_ha-831885-m03_ha-831885-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885-m02 "sudo cat /home/docker/cp-test_ha-831885-m03_ha-831885-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 cp ha-831885-m03:/home/docker/cp-test.txt ha-831885-m04:/home/docker/cp-test_ha-831885-m03_ha-831885-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885-m04 "sudo cat /home/docker/cp-test_ha-831885-m03_ha-831885-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 cp testdata/cp-test.txt ha-831885-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 cp ha-831885-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3589809381/001/cp-test_ha-831885-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 cp ha-831885-m04:/home/docker/cp-test.txt ha-831885:/home/docker/cp-test_ha-831885-m04_ha-831885.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885 "sudo cat /home/docker/cp-test_ha-831885-m04_ha-831885.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 cp ha-831885-m04:/home/docker/cp-test.txt ha-831885-m02:/home/docker/cp-test_ha-831885-m04_ha-831885-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885-m02 "sudo cat /home/docker/cp-test_ha-831885-m04_ha-831885-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 cp ha-831885-m04:/home/docker/cp-test.txt ha-831885-m03:/home/docker/cp-test_ha-831885-m04_ha-831885-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 ssh -n ha-831885-m03 "sudo cat /home/docker/cp-test_ha-831885-m04_ha-831885-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.98s)
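Each CopyFile step above round-trips testdata/cp-test.txt through a node with `minikube cp` and verifies it with `ssh -n <node> "sudo cat ..."`. A local stand-in for that copy-then-verify pattern (paths are temp files here, not the real node paths):

```shell
# Write a source file, "copy" it, and read the copy back to verify the
# bytes survived the transfer -- the same check the test performs per node.
src=$(mktemp); dst=$(mktemp)
printf 'cp-test contents\n' > "$src"
cp "$src" "$dst"            # stands in for: minikube -p <profile> cp SRC NODE:DEST
copied=$(cat "$dst")        # stands in for: minikube -p <profile> ssh -n NODE "sudo cat DEST"
echo "$copied"
rm -f "$src" "$dst"
```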

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 node stop m02 -v=7 --alsologtostderr
E0816 18:07:22.338460  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:22.344814  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:22.356250  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:22.377647  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:22.419096  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:22.500452  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:22.661867  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:22.983632  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:23.625667  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:24.908067  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:27.470697  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-831885 node stop m02 -v=7 --alsologtostderr: (12.037876695s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-831885 status -v=7 --alsologtostderr: exit status 7 (750.720205ms)

                                                
                                                
-- stdout --
	ha-831885
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-831885-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-831885-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-831885-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 18:07:30.703491  329705 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:07:30.703659  329705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:07:30.703671  329705 out.go:358] Setting ErrFile to fd 2...
	I0816 18:07:30.703677  329705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:07:30.704002  329705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-278896/.minikube/bin
	I0816 18:07:30.704299  329705 out.go:352] Setting JSON to false
	I0816 18:07:30.704344  329705 mustload.go:65] Loading cluster: ha-831885
	I0816 18:07:30.704409  329705 notify.go:220] Checking for updates...
	I0816 18:07:30.704830  329705 config.go:182] Loaded profile config "ha-831885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:07:30.704843  329705 status.go:255] checking status of ha-831885 ...
	I0816 18:07:30.705467  329705 cli_runner.go:164] Run: docker container inspect ha-831885 --format={{.State.Status}}
	I0816 18:07:30.727721  329705 status.go:330] ha-831885 host status = "Running" (err=<nil>)
	I0816 18:07:30.727746  329705 host.go:66] Checking if "ha-831885" exists ...
	I0816 18:07:30.728344  329705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-831885
	I0816 18:07:30.762705  329705 host.go:66] Checking if "ha-831885" exists ...
	I0816 18:07:30.763006  329705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 18:07:30.763051  329705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-831885
	I0816 18:07:30.784559  329705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/ha-831885/id_rsa Username:docker}
	I0816 18:07:30.874792  329705 ssh_runner.go:195] Run: systemctl --version
	I0816 18:07:30.879544  329705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:07:30.892187  329705 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 18:07:30.954683  329705 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-16 18:07:30.942766716 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 18:07:30.955357  329705 kubeconfig.go:125] found "ha-831885" server: "https://192.168.49.254:8443"
	I0816 18:07:30.955393  329705 api_server.go:166] Checking apiserver status ...
	I0816 18:07:30.955439  329705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:07:30.967528  329705 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1409/cgroup
	I0816 18:07:30.978573  329705 api_server.go:182] apiserver freezer: "2:freezer:/docker/c04c4353a879a8f24001cc824713a6934ee15fc76610ff8daf5045e8281f1111/crio/crio-f246473141d33a26f471f324d18ce820034e27d603bfcb59adb3d3cd70e14705"
	I0816 18:07:30.978647  329705 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c04c4353a879a8f24001cc824713a6934ee15fc76610ff8daf5045e8281f1111/crio/crio-f246473141d33a26f471f324d18ce820034e27d603bfcb59adb3d3cd70e14705/freezer.state
	I0816 18:07:30.987982  329705 api_server.go:204] freezer state: "THAWED"
	I0816 18:07:30.988017  329705 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0816 18:07:30.996880  329705 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0816 18:07:30.996911  329705 status.go:422] ha-831885 apiserver status = Running (err=<nil>)
	I0816 18:07:30.996924  329705 status.go:257] ha-831885 status: &{Name:ha-831885 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 18:07:30.996941  329705 status.go:255] checking status of ha-831885-m02 ...
	I0816 18:07:30.997248  329705 cli_runner.go:164] Run: docker container inspect ha-831885-m02 --format={{.State.Status}}
	I0816 18:07:31.017183  329705 status.go:330] ha-831885-m02 host status = "Stopped" (err=<nil>)
	I0816 18:07:31.017210  329705 status.go:343] host is not running, skipping remaining checks
	I0816 18:07:31.017237  329705 status.go:257] ha-831885-m02 status: &{Name:ha-831885-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 18:07:31.017264  329705 status.go:255] checking status of ha-831885-m03 ...
	I0816 18:07:31.017655  329705 cli_runner.go:164] Run: docker container inspect ha-831885-m03 --format={{.State.Status}}
	I0816 18:07:31.034031  329705 status.go:330] ha-831885-m03 host status = "Running" (err=<nil>)
	I0816 18:07:31.034057  329705 host.go:66] Checking if "ha-831885-m03" exists ...
	I0816 18:07:31.034376  329705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-831885-m03
	I0816 18:07:31.057751  329705 host.go:66] Checking if "ha-831885-m03" exists ...
	I0816 18:07:31.058125  329705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 18:07:31.058184  329705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-831885-m03
	I0816 18:07:31.078389  329705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/ha-831885-m03/id_rsa Username:docker}
	I0816 18:07:31.169960  329705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:07:31.182853  329705 kubeconfig.go:125] found "ha-831885" server: "https://192.168.49.254:8443"
	I0816 18:07:31.182885  329705 api_server.go:166] Checking apiserver status ...
	I0816 18:07:31.182929  329705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:07:31.194153  329705 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1327/cgroup
	I0816 18:07:31.204538  329705 api_server.go:182] apiserver freezer: "2:freezer:/docker/5eea3486d913807c102b9e4c3394ea1eef72ccd9b72c67c2ba1c6bd712c1ecf5/crio/crio-ff7525c2e9702f6930cf05243819893156e3c13099d9d27cc1f627c24513fe98"
	I0816 18:07:31.204709  329705 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5eea3486d913807c102b9e4c3394ea1eef72ccd9b72c67c2ba1c6bd712c1ecf5/crio/crio-ff7525c2e9702f6930cf05243819893156e3c13099d9d27cc1f627c24513fe98/freezer.state
	I0816 18:07:31.214002  329705 api_server.go:204] freezer state: "THAWED"
	I0816 18:07:31.214031  329705 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0816 18:07:31.223077  329705 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0816 18:07:31.223123  329705 status.go:422] ha-831885-m03 apiserver status = Running (err=<nil>)
	I0816 18:07:31.223133  329705 status.go:257] ha-831885-m03 status: &{Name:ha-831885-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 18:07:31.223171  329705 status.go:255] checking status of ha-831885-m04 ...
	I0816 18:07:31.223545  329705 cli_runner.go:164] Run: docker container inspect ha-831885-m04 --format={{.State.Status}}
	I0816 18:07:31.241004  329705 status.go:330] ha-831885-m04 host status = "Running" (err=<nil>)
	I0816 18:07:31.241032  329705 host.go:66] Checking if "ha-831885-m04" exists ...
	I0816 18:07:31.241359  329705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-831885-m04
	I0816 18:07:31.260415  329705 host.go:66] Checking if "ha-831885-m04" exists ...
	I0816 18:07:31.260869  329705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 18:07:31.260932  329705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-831885-m04
	I0816 18:07:31.279330  329705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/ha-831885-m04/id_rsa Username:docker}
	I0816 18:07:31.374061  329705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:07:31.387458  329705 status.go:257] ha-831885-m04 status: &{Name:ha-831885-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.79s)
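The non-zero exit above is expected: `minikube status` exits with code 7 when any node is stopped, so the test treats exit status 7 plus the "Stopped" stdout as success. A sketch of branching on that exit code, with a stub standing in for the real `minikube -p ha-831885 status` call:

```shell
# Stand-in for `minikube status` after stopping m02; in the log it exited
# with status 7 to signal a stopped node.
fake_status() { return 7; }

if fake_status; then
  state=running
else
  rc=$?                      # exit status of the command the `if` just ran
  state="degraded (exit $rc)"
fi
echo "$state"
```

Callers should branch on the exit code rather than parsing the per-node stdout table, which is informational.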

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (31.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 node start m02 -v=7 --alsologtostderr
E0816 18:07:32.592298  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:42.834555  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:45.770471  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-831885 node start m02 -v=7 --alsologtostderr: (30.191698074s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 status -v=7 --alsologtostderr
E0816 18:08:03.316163  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-831885 status -v=7 --alsologtostderr: (1.313960937s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (31.66s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (5.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (5.049684204s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (5.05s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (144.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-831885 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-831885 -v=7 --alsologtostderr
E0816 18:08:13.475618  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:44.277554  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-831885 -v=7 --alsologtostderr: (37.061413581s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-831885 --wait=true -v=7 --alsologtostderr
E0816 18:10:06.198968  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-831885 --wait=true -v=7 --alsologtostderr: (1m47.17951568s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-831885
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (144.38s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.99s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-831885 node delete m03 -v=7 --alsologtostderr: (11.734918267s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Done: out/minikube-linux-arm64 -p ha-831885 status -v=7 --alsologtostderr: (1.055629196s)
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.99s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

TestMultiControlPlane/serial/StopCluster (35.87s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-831885 stop -v=7 --alsologtostderr: (35.752133076s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-831885 status -v=7 --alsologtostderr: exit status 7 (114.096407ms)

-- stdout --
	ha-831885
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-831885-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-831885-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0816 18:11:22.373549  343661 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:11:22.373728  343661 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:11:22.373759  343661 out.go:358] Setting ErrFile to fd 2...
	I0816 18:11:22.373780  343661 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:11:22.374062  343661 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-278896/.minikube/bin
	I0816 18:11:22.374332  343661 out.go:352] Setting JSON to false
	I0816 18:11:22.374417  343661 mustload.go:65] Loading cluster: ha-831885
	I0816 18:11:22.374489  343661 notify.go:220] Checking for updates...
	I0816 18:11:22.374899  343661 config.go:182] Loaded profile config "ha-831885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:11:22.374979  343661 status.go:255] checking status of ha-831885 ...
	I0816 18:11:22.375837  343661 cli_runner.go:164] Run: docker container inspect ha-831885 --format={{.State.Status}}
	I0816 18:11:22.393779  343661 status.go:330] ha-831885 host status = "Stopped" (err=<nil>)
	I0816 18:11:22.393801  343661 status.go:343] host is not running, skipping remaining checks
	I0816 18:11:22.393809  343661 status.go:257] ha-831885 status: &{Name:ha-831885 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 18:11:22.393844  343661 status.go:255] checking status of ha-831885-m02 ...
	I0816 18:11:22.394151  343661 cli_runner.go:164] Run: docker container inspect ha-831885-m02 --format={{.State.Status}}
	I0816 18:11:22.418113  343661 status.go:330] ha-831885-m02 host status = "Stopped" (err=<nil>)
	I0816 18:11:22.418131  343661 status.go:343] host is not running, skipping remaining checks
	I0816 18:11:22.418138  343661 status.go:257] ha-831885-m02 status: &{Name:ha-831885-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 18:11:22.418158  343661 status.go:255] checking status of ha-831885-m04 ...
	I0816 18:11:22.418450  343661 cli_runner.go:164] Run: docker container inspect ha-831885-m04 --format={{.State.Status}}
	I0816 18:11:22.435607  343661 status.go:330] ha-831885-m04 host status = "Stopped" (err=<nil>)
	I0816 18:11:22.435631  343661 status.go:343] host is not running, skipping remaining checks
	I0816 18:11:22.435638  343661 status.go:257] ha-831885-m04 status: &{Name:ha-831885-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.87s)

TestMultiControlPlane/serial/RestartCluster (104.15s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-831885 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0816 18:12:22.338428  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:12:45.770361  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:12:50.040274  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-831885 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m43.180852858s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (104.15s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.54s)

TestMultiControlPlane/serial/AddSecondaryNode (75.52s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-831885 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-831885 --control-plane -v=7 --alsologtostderr: (1m14.533934491s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-831885 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.52s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)

TestJSONOutput/start/Command (51.66s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-818182 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-818182 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (51.658693932s)
--- PASS: TestJSONOutput/start/Command (51.66s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.77s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-818182 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.67s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-818182 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.98s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-818182 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-818182 --output=json --user=testUser: (5.982333413s)
--- PASS: TestJSONOutput/stop/Command (5.98s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-239462 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-239462 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (77.487589ms)

-- stdout --
	{"specversion":"1.0","id":"8a9ed595-7f90-414c-8619-c5f2de6aabef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-239462] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7bc2e546-9dd4-4c7b-8c9d-8da283a89f74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19461"}}
	{"specversion":"1.0","id":"d94a1c77-8b4b-4671-8b99-a488aa1ac551","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e143f697-f838-4356-a803-e3851f9b3cb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19461-278896/kubeconfig"}}
	{"specversion":"1.0","id":"64379802-3c2f-4d63-b2bd-352ce0427873","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-278896/.minikube"}}
	{"specversion":"1.0","id":"40f015e7-6b85-4551-89f0-b290b7326141","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"33fe742e-1696-48d8-98cd-398ec7dfff15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"49ff7fa9-72f9-425c-8dab-1126ac1903e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-239462" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-239462
--- PASS: TestErrorJSONOutput (0.21s)

TestKicCustomNetwork/create_custom_network (42.26s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-588172 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-588172 --network=: (40.175340219s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-588172" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-588172
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-588172: (2.064462654s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.26s)

TestKicCustomNetwork/use_default_bridge_network (33.41s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-633144 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-633144 --network=bridge: (31.459323s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-633144" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-633144
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-633144: (1.929934652s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.41s)

TestKicExistingNetwork (34.51s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-995826 --network=existing-network
E0816 18:17:22.338783  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-995826 --network=existing-network: (32.363142353s)
helpers_test.go:175: Cleaning up "existing-network-995826" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-995826
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-995826: (1.995621433s)
--- PASS: TestKicExistingNetwork (34.51s)

TestKicCustomSubnet (34.75s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-834978 --subnet=192.168.60.0/24
E0816 18:17:45.770228  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-834978 --subnet=192.168.60.0/24: (32.490182864s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-834978 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-834978" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-834978
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-834978: (2.23533638s)
--- PASS: TestKicCustomSubnet (34.75s)

TestKicStaticIP (32.94s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-176446 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-176446 --static-ip=192.168.200.200: (30.665466439s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-176446 ip
helpers_test.go:175: Cleaning up "static-ip-176446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-176446
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-176446: (2.125011727s)
--- PASS: TestKicStaticIP (32.94s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (70.39s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-759028 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-759028 --driver=docker  --container-runtime=crio: (33.540323033s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-761555 --driver=docker  --container-runtime=crio
E0816 18:19:08.837004  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-761555 --driver=docker  --container-runtime=crio: (31.249150508s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-759028
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-761555
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-761555" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-761555
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-761555: (1.978128953s)
helpers_test.go:175: Cleaning up "first-759028" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-759028
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-759028: (2.387719639s)
--- PASS: TestMinikubeProfile (70.39s)

TestMountStart/serial/StartWithMountFirst (7.33s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-139347 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-139347 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.329378131s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.33s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-139347 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (6.85s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-151843 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-151843 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.849952693s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.85s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-151843 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.12s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-139347 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-139347 --alsologtostderr -v=5: (2.123928921s)
--- PASS: TestMountStart/serial/DeleteFirst (2.12s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-151843 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-151843
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-151843: (1.219091188s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.82s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-151843
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-151843: (6.820243053s)
--- PASS: TestMountStart/serial/RestartStopped (7.82s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-151843 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (78.21s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-689145 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-689145 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m17.59703862s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (78.21s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.06s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689145 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689145 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-689145 -- rollout status deployment/busybox: (3.136081049s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689145 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689145 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689145 -- exec busybox-7dff88458-dsk28 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689145 -- exec busybox-7dff88458-n878s -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689145 -- exec busybox-7dff88458-dsk28 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689145 -- exec busybox-7dff88458-n878s -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689145 -- exec busybox-7dff88458-dsk28 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689145 -- exec busybox-7dff88458-n878s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.06s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689145 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689145 -- exec busybox-7dff88458-dsk28 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689145 -- exec busybox-7dff88458-dsk28 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689145 -- exec busybox-7dff88458-n878s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689145 -- exec busybox-7dff88458-n878s -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)

                                                
                                    
TestMultiNode/serial/AddNode (29s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-689145 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-689145 -v 3 --alsologtostderr: (28.299603288s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.00s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-689145 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.02s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 cp testdata/cp-test.txt multinode-689145:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 ssh -n multinode-689145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 cp multinode-689145:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3324109693/001/cp-test_multinode-689145.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 ssh -n multinode-689145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 cp multinode-689145:/home/docker/cp-test.txt multinode-689145-m02:/home/docker/cp-test_multinode-689145_multinode-689145-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 ssh -n multinode-689145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 ssh -n multinode-689145-m02 "sudo cat /home/docker/cp-test_multinode-689145_multinode-689145-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 cp multinode-689145:/home/docker/cp-test.txt multinode-689145-m03:/home/docker/cp-test_multinode-689145_multinode-689145-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 ssh -n multinode-689145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 ssh -n multinode-689145-m03 "sudo cat /home/docker/cp-test_multinode-689145_multinode-689145-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 cp testdata/cp-test.txt multinode-689145-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 ssh -n multinode-689145-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 cp multinode-689145-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3324109693/001/cp-test_multinode-689145-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 ssh -n multinode-689145-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 cp multinode-689145-m02:/home/docker/cp-test.txt multinode-689145:/home/docker/cp-test_multinode-689145-m02_multinode-689145.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 ssh -n multinode-689145-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 ssh -n multinode-689145 "sudo cat /home/docker/cp-test_multinode-689145-m02_multinode-689145.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 cp multinode-689145-m02:/home/docker/cp-test.txt multinode-689145-m03:/home/docker/cp-test_multinode-689145-m02_multinode-689145-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 ssh -n multinode-689145-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 ssh -n multinode-689145-m03 "sudo cat /home/docker/cp-test_multinode-689145-m02_multinode-689145-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 cp testdata/cp-test.txt multinode-689145-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 ssh -n multinode-689145-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 cp multinode-689145-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3324109693/001/cp-test_multinode-689145-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 ssh -n multinode-689145-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 cp multinode-689145-m03:/home/docker/cp-test.txt multinode-689145:/home/docker/cp-test_multinode-689145-m03_multinode-689145.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 ssh -n multinode-689145-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 ssh -n multinode-689145 "sudo cat /home/docker/cp-test_multinode-689145-m03_multinode-689145.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 cp multinode-689145-m03:/home/docker/cp-test.txt multinode-689145-m02:/home/docker/cp-test_multinode-689145-m03_multinode-689145-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 ssh -n multinode-689145-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 ssh -n multinode-689145-m02 "sudo cat /home/docker/cp-test_multinode-689145-m03_multinode-689145-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.02s)

                                                
                                    
TestMultiNode/serial/StopNode (2.26s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-689145 node stop m03: (1.199891178s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-689145 status: exit status 7 (514.661591ms)

                                                
                                                
-- stdout --
	multinode-689145
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-689145-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-689145-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-689145 status --alsologtostderr: exit status 7 (542.618425ms)

                                                
                                                
-- stdout --
	multinode-689145
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-689145-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-689145-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 18:22:17.694531  396939 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:22:17.694764  396939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:22:17.694796  396939 out.go:358] Setting ErrFile to fd 2...
	I0816 18:22:17.694816  396939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:22:17.695118  396939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-278896/.minikube/bin
	I0816 18:22:17.695481  396939 out.go:352] Setting JSON to false
	I0816 18:22:17.695557  396939 mustload.go:65] Loading cluster: multinode-689145
	I0816 18:22:17.695653  396939 notify.go:220] Checking for updates...
	I0816 18:22:17.696936  396939 config.go:182] Loaded profile config "multinode-689145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:22:17.696984  396939 status.go:255] checking status of multinode-689145 ...
	I0816 18:22:17.697684  396939 cli_runner.go:164] Run: docker container inspect multinode-689145 --format={{.State.Status}}
	I0816 18:22:17.720828  396939 status.go:330] multinode-689145 host status = "Running" (err=<nil>)
	I0816 18:22:17.720851  396939 host.go:66] Checking if "multinode-689145" exists ...
	I0816 18:22:17.721153  396939 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-689145
	I0816 18:22:17.759800  396939 host.go:66] Checking if "multinode-689145" exists ...
	I0816 18:22:17.760128  396939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 18:22:17.760172  396939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-689145
	I0816 18:22:17.779792  396939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33269 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/multinode-689145/id_rsa Username:docker}
	I0816 18:22:17.870110  396939 ssh_runner.go:195] Run: systemctl --version
	I0816 18:22:17.876946  396939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:22:17.890094  396939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 18:22:17.953522  396939 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-16 18:22:17.944067976 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 18:22:17.954184  396939 kubeconfig.go:125] found "multinode-689145" server: "https://192.168.67.2:8443"
	I0816 18:22:17.954215  396939 api_server.go:166] Checking apiserver status ...
	I0816 18:22:17.954265  396939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:22:17.965458  396939 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1353/cgroup
	I0816 18:22:17.975182  396939 api_server.go:182] apiserver freezer: "2:freezer:/docker/cc7a5053b93fbb666c9d9c89fab1291cfe3ebe60fce37d69dfb92a316f8b88e7/crio/crio-1ebe67401b25605c49cac3402432da3672caa8bb535f9a87c4189716d5698bb1"
	I0816 18:22:17.975257  396939 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cc7a5053b93fbb666c9d9c89fab1291cfe3ebe60fce37d69dfb92a316f8b88e7/crio/crio-1ebe67401b25605c49cac3402432da3672caa8bb535f9a87c4189716d5698bb1/freezer.state
	I0816 18:22:17.984302  396939 api_server.go:204] freezer state: "THAWED"
	I0816 18:22:17.984332  396939 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 18:22:17.993273  396939 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0816 18:22:17.993303  396939 status.go:422] multinode-689145 apiserver status = Running (err=<nil>)
	I0816 18:22:17.993316  396939 status.go:257] multinode-689145 status: &{Name:multinode-689145 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 18:22:17.993335  396939 status.go:255] checking status of multinode-689145-m02 ...
	I0816 18:22:17.993669  396939 cli_runner.go:164] Run: docker container inspect multinode-689145-m02 --format={{.State.Status}}
	I0816 18:22:18.017836  396939 status.go:330] multinode-689145-m02 host status = "Running" (err=<nil>)
	I0816 18:22:18.017865  396939 host.go:66] Checking if "multinode-689145-m02" exists ...
	I0816 18:22:18.018207  396939 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-689145-m02
	I0816 18:22:18.037344  396939 host.go:66] Checking if "multinode-689145-m02" exists ...
	I0816 18:22:18.037683  396939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 18:22:18.037738  396939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-689145-m02
	I0816 18:22:18.056326  396939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33274 SSHKeyPath:/home/jenkins/minikube-integration/19461-278896/.minikube/machines/multinode-689145-m02/id_rsa Username:docker}
	I0816 18:22:18.153690  396939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:22:18.166288  396939 status.go:257] multinode-689145-m02 status: &{Name:multinode-689145-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0816 18:22:18.166325  396939 status.go:255] checking status of multinode-689145-m03 ...
	I0816 18:22:18.166649  396939 cli_runner.go:164] Run: docker container inspect multinode-689145-m03 --format={{.State.Status}}
	I0816 18:22:18.184413  396939 status.go:330] multinode-689145-m03 host status = "Stopped" (err=<nil>)
	I0816 18:22:18.184444  396939 status.go:343] host is not running, skipping remaining checks
	I0816 18:22:18.184453  396939 status.go:257] multinode-689145-m03 status: &{Name:multinode-689145-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.1s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 node start m03 -v=7 --alsologtostderr
E0816 18:22:22.338084  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-689145 node start m03 -v=7 --alsologtostderr: (9.317462527s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.10s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (98.3s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-689145
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-689145
E0816 18:22:45.771194  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-689145: (24.848752193s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-689145 --wait=true -v=8 --alsologtostderr
E0816 18:23:45.402016  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-689145 --wait=true -v=8 --alsologtostderr: (1m13.314259161s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-689145
--- PASS: TestMultiNode/serial/RestartKeepsNodes (98.30s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.61s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-689145 node delete m03: (4.941746055s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.61s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.82s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-689145 stop: (23.63704802s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-689145 status: exit status 7 (88.72025ms)

                                                
                                                
-- stdout --
	multinode-689145
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-689145-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-689145 status --alsologtostderr: exit status 7 (95.663177ms)

                                                
                                                
-- stdout --
	multinode-689145
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-689145-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 18:24:35.977524  404749 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:24:35.977688  404749 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:24:35.977700  404749 out.go:358] Setting ErrFile to fd 2...
	I0816 18:24:35.977706  404749 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:24:35.977961  404749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-278896/.minikube/bin
	I0816 18:24:35.978154  404749 out.go:352] Setting JSON to false
	I0816 18:24:35.978197  404749 mustload.go:65] Loading cluster: multinode-689145
	I0816 18:24:35.978288  404749 notify.go:220] Checking for updates...
	I0816 18:24:35.978615  404749 config.go:182] Loaded profile config "multinode-689145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:24:35.978633  404749 status.go:255] checking status of multinode-689145 ...
	I0816 18:24:35.979138  404749 cli_runner.go:164] Run: docker container inspect multinode-689145 --format={{.State.Status}}
	I0816 18:24:36.005066  404749 status.go:330] multinode-689145 host status = "Stopped" (err=<nil>)
	I0816 18:24:36.005094  404749 status.go:343] host is not running, skipping remaining checks
	I0816 18:24:36.005131  404749 status.go:257] multinode-689145 status: &{Name:multinode-689145 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 18:24:36.005168  404749 status.go:255] checking status of multinode-689145-m02 ...
	I0816 18:24:36.005523  404749 cli_runner.go:164] Run: docker container inspect multinode-689145-m02 --format={{.State.Status}}
	I0816 18:24:36.026699  404749 status.go:330] multinode-689145-m02 host status = "Stopped" (err=<nil>)
	I0816 18:24:36.026738  404749 status.go:343] host is not running, skipping remaining checks
	I0816 18:24:36.026748  404749 status.go:257] multinode-689145-m02 status: &{Name:multinode-689145-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.82s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (56.3s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-689145 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-689145 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (55.638203414s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689145 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.30s)

TestMultiNode/serial/ValidateNameConflict (34.87s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-689145
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-689145-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-689145-m02 --driver=docker  --container-runtime=crio: exit status 14 (82.483113ms)

-- stdout --
	* [multinode-689145-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19461-278896/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-278896/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-689145-m02' is duplicated with machine name 'multinode-689145-m02' in profile 'multinode-689145'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-689145-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-689145-m03 --driver=docker  --container-runtime=crio: (32.446529019s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-689145
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-689145: exit status 80 (333.247055ms)

-- stdout --
	* Adding node m03 to cluster multinode-689145 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-689145-m03 already exists in multinode-689145-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-689145-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-689145-m03: (1.952715212s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.87s)

TestPreload (124.38s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-225989 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0816 18:27:22.337893  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-225989 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m33.540973296s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-225989 image pull gcr.io/k8s-minikube/busybox
E0816 18:27:45.770813  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-225989 image pull gcr.io/k8s-minikube/busybox: (1.905916849s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-225989
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-225989: (5.806844139s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-225989 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-225989 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (20.368156059s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-225989 image list
helpers_test.go:175: Cleaning up "test-preload-225989" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-225989
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-225989: (2.461077816s)
--- PASS: TestPreload (124.38s)

TestScheduledStopUnix (104.91s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-508181 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-508181 --memory=2048 --driver=docker  --container-runtime=crio: (28.631887497s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-508181 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-508181 -n scheduled-stop-508181
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-508181 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-508181 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-508181 -n scheduled-stop-508181
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-508181
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-508181 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-508181
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-508181: exit status 7 (66.221657ms)

-- stdout --
	scheduled-stop-508181
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-508181 -n scheduled-stop-508181
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-508181 -n scheduled-stop-508181: exit status 7 (63.588287ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-508181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-508181
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-508181: (4.737530244s)
--- PASS: TestScheduledStopUnix (104.91s)

TestInsufficientStorage (13.53s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-882161 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-882161 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.031004291s)

-- stdout --
	{"specversion":"1.0","id":"a0ca3556-6ec9-4d0b-a46a-336b7586800d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-882161] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c447b5c1-d8ff-4b39-a911-1d9149dccabb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19461"}}
	{"specversion":"1.0","id":"a655f49e-dc30-4870-b998-498f237267b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bcb42591-2f56-45a9-b8ae-2fbd462f4014","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19461-278896/kubeconfig"}}
	{"specversion":"1.0","id":"7cbf5585-1e1f-4751-8b27-4fa0447aa79d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-278896/.minikube"}}
	{"specversion":"1.0","id":"c251ee33-1774-4268-8ee3-9866f687cd86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"220b6e4b-5bf6-4d74-b61e-dc7e60053644","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e2ceed40-a373-4700-9222-c3492cacdf60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"664672c9-7a96-4dd6-a1c7-2d1e1e9fd683","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"5cecc306-9449-47ab-8fc2-af60e949a6a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3cb61c2e-a921-4c07-a15e-951312b247ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b1d232ba-84ab-4754-a092-9ffb900fe4f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-882161\" primary control-plane node in \"insufficient-storage-882161\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f97cc42f-4777-4608-bdb7-0c9cb28f32b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1723740748-19452 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1dd7c0e5-cbad-4a9f-a757-224f7e6972ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"65262b3d-97fb-4323-a71f-03bb7628cf86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-882161 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-882161 --output=json --layout=cluster: exit status 7 (293.72545ms)

-- stdout --
	{"Name":"insufficient-storage-882161","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-882161","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0816 18:30:11.869587  422427 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-882161" does not appear in /home/jenkins/minikube-integration/19461-278896/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-882161 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-882161 --output=json --layout=cluster: exit status 7 (281.641927ms)

-- stdout --
	{"Name":"insufficient-storage-882161","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-882161","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0816 18:30:12.151255  422487 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-882161" does not appear in /home/jenkins/minikube-integration/19461-278896/kubeconfig
	E0816 18:30:12.161943  422487 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/insufficient-storage-882161/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-882161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-882161
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-882161: (1.927671167s)
--- PASS: TestInsufficientStorage (13.53s)

TestRunningBinaryUpgrade (63.84s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2935652945 start -p running-upgrade-037249 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2935652945 start -p running-upgrade-037249 --memory=2200 --vm-driver=docker  --container-runtime=crio: (35.464997573s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-037249 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-037249 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.748483451s)
helpers_test.go:175: Cleaning up "running-upgrade-037249" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-037249
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-037249: (2.92040689s)
--- PASS: TestRunningBinaryUpgrade (63.84s)

TestKubernetesUpgrade (467.09s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-424742 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-424742 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m10.712139034s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-424742
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-424742: (2.903759615s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-424742 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-424742 status --format={{.Host}}: exit status 7 (92.014112ms)

-- stdout --
	Stopped

                                                
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-424742 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0816 18:32:45.771061  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-424742 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m37.723081131s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-424742 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-424742 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-424742 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (140.392156ms)

-- stdout --
	* [kubernetes-upgrade-424742] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19461-278896/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-278896/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-424742
	    minikube start -p kubernetes-upgrade-424742 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4247422 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-424742 --kubernetes-version=v1.31.0
	    

                                                
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-424742 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0816 18:37:22.338291  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-424742 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m53.101018237s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-424742" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-424742
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-424742: (2.261353006s)
--- PASS: TestKubernetesUpgrade (467.09s)

TestMissingContainerUpgrade (170.7s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3634732867 start -p missing-upgrade-860753 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3634732867 start -p missing-upgrade-860753 --memory=2200 --driver=docker  --container-runtime=crio: (1m33.994643857s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-860753
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-860753: (10.370242331s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-860753
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-860753 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0816 18:32:22.338408  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-860753 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m3.478865114s)
helpers_test.go:175: Cleaning up "missing-upgrade-860753" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-860753
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-860753: (2.032437226s)
--- PASS: TestMissingContainerUpgrade (170.70s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-045154 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-045154 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (79.7343ms)

-- stdout --
	* [NoKubernetes-045154] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19461-278896/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-278896/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (42.4s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-045154 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-045154 --driver=docker  --container-runtime=crio: (41.837286575s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-045154 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.40s)

TestNoKubernetes/serial/StartWithStopK8s (9.15s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-045154 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-045154 --no-kubernetes --driver=docker  --container-runtime=crio: (6.648873724s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-045154 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-045154 status -o json: exit status 2 (400.8911ms)

-- stdout --
	{"Name":"NoKubernetes-045154","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-045154
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-045154: (2.100970077s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.15s)

TestNoKubernetes/serial/Start (9.17s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-045154 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-045154 --no-kubernetes --driver=docker  --container-runtime=crio: (9.167015605s)
--- PASS: TestNoKubernetes/serial/Start (9.17s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-045154 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-045154 "sudo systemctl is-active --quiet service kubelet": exit status 1 (323.605522ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
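The non-zero exit above is the expected outcome: `systemctl is-active` exits 0 only when the unit is active (the `status 3` in stderr is systemd's "inactive" code, surfaced through ssh). A sketch of the same check pattern, using `false` as a stand-in for the real ssh'd `systemctl` call so it runs anywhere:

```shell
# systemctl is-active exits 0 only when the unit is active; the test above
# expects a non-zero exit because the kubelet is deliberately not running.
# `false` stands in for: minikube ssh "sudo systemctl is-active --quiet service kubelet"
check_not_active() { false; }

if ! check_not_active; then
  echo "kubelet inactive, as expected with --no-kubernetes"
fi
```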

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.08s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-045154
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-045154: (1.270344543s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-045154 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-045154 --driver=docker  --container-runtime=crio: (7.048613184s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.05s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-045154 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-045154 "sudo systemctl is-active --quiet service kubelet": exit status 1 (271.66318ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.93s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.93s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (74.56s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3363417375 start -p stopped-upgrade-714168 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3363417375 start -p stopped-upgrade-714168 --memory=2200 --vm-driver=docker  --container-runtime=crio: (33.436687448s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3363417375 -p stopped-upgrade-714168 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3363417375 -p stopped-upgrade-714168 stop: (2.552763531s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-714168 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-714168 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.566178707s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (74.56s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-714168
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-714168: (1.095339395s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.10s)

                                                
                                    
TestPause/serial/Start (53.36s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-017354 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0816 18:35:48.838363  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-017354 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (53.355160204s)
--- PASS: TestPause/serial/Start (53.36s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (23.76s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-017354 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-017354 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.743239857s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (23.76s)

                                                
                                    
TestPause/serial/Pause (0.92s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-017354 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.92s)

                                                
                                    
TestPause/serial/VerifyStatus (0.44s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-017354 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-017354 --output=json --layout=cluster: exit status 2 (441.809567ms)

                                                
                                                
-- stdout --
	{"Name":"pause-017354","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-017354","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.44s)
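The `--layout=cluster` JSON above uses HTTP-like status codes: 200 with `StatusName:"OK"`, 405 with `"Stopped"`, and 418 with `"Paused"` (so exit status 2 while paused is expected). A sketch decoding those codes; the mapping is inferred from the `StatusCode`/`StatusName` pairs visible in this log, not taken from minikube source:

```shell
# Map the numeric StatusCode values seen in the JSON above to their names
status_name() {
  case "$1" in
    200) echo "OK" ;;
    405) echo "Stopped" ;;
    418) echo "Paused" ;;
    *)   echo "Unknown" ;;
  esac
}

status_name 418   # the apiserver's code while the cluster is paused
status_name 405   # the kubelet's code in the same snapshot
```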

                                                
                                    
TestPause/serial/Unpause (0.86s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-017354 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.86s)

                                                
                                    
TestPause/serial/PauseAgain (0.84s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-017354 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

                                                
                                    
TestPause/serial/DeletePaused (2.84s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-017354 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-017354 --alsologtostderr -v=5: (2.841365079s)
--- PASS: TestPause/serial/DeletePaused (2.84s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.35s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-017354
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-017354: exit status 1 (13.771874ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-017354: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.35s)
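The deletion check above treats `docker volume inspect` failing (exit 1, empty `[]` on stdout, "no such volume" on stderr) as proof the volume is gone. The same check pattern, simulated with a hypothetical stand-in function so it runs without Docker:

```shell
# `inspect_missing` mimics `docker volume inspect` on a deleted volume:
# prints the empty JSON array and exits non-zero.
inspect_missing() { echo '[]'; return 1; }

# A failing inspect is the success condition for "resources were deleted"
if ! out=$(inspect_missing); then
  echo "volume gone: $out"
fi
```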

                                                
                                    
TestNetworkPlugins/group/false (3.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-121621 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-121621 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (178.849894ms)

                                                
                                                
-- stdout --
	* [false-121621] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19461-278896/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-278896/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 18:37:34.324286  461439 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:37:34.324393  461439 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:37:34.324399  461439 out.go:358] Setting ErrFile to fd 2...
	I0816 18:37:34.324404  461439 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:37:34.324686  461439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-278896/.minikube/bin
	I0816 18:37:34.325104  461439 out.go:352] Setting JSON to false
	I0816 18:37:34.326024  461439 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":8403,"bootTime":1723825052,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0816 18:37:34.326095  461439 start.go:139] virtualization:  
	I0816 18:37:34.328533  461439 out.go:177] * [false-121621] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0816 18:37:34.330426  461439 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 18:37:34.330523  461439 notify.go:220] Checking for updates...
	I0816 18:37:34.334272  461439 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 18:37:34.336282  461439 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-278896/kubeconfig
	I0816 18:37:34.338180  461439 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-278896/.minikube
	I0816 18:37:34.340412  461439 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0816 18:37:34.342348  461439 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 18:37:34.344822  461439 config.go:182] Loaded profile config "kubernetes-upgrade-424742": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:37:34.344928  461439 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 18:37:34.378936  461439 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0816 18:37:34.379132  461439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 18:37:34.439002  461439 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-16 18:37:34.429466939 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 18:37:34.439121  461439 docker.go:307] overlay module found
	I0816 18:37:34.441608  461439 out.go:177] * Using the docker driver based on user configuration
	I0816 18:37:34.443443  461439 start.go:297] selected driver: docker
	I0816 18:37:34.443463  461439 start.go:901] validating driver "docker" against <nil>
	I0816 18:37:34.443478  461439 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 18:37:34.446071  461439 out.go:201] 
	W0816 18:37:34.447974  461439 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0816 18:37:34.449720  461439 out.go:201] 

                                                
                                                
** /stderr **
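The MK_USAGE failure above (exit 14) is the intended outcome: this test deliberately passes `--cni=false` to confirm minikube rejects it when the runtime is crio, which requires a CNI. A sketch of what a working invocation would look like, printed rather than executed since it needs a real minikube environment; `--cni=bridge` is one concrete plugin choice (an assumption, not what this test runs), and the profile name is reused from the log:

```shell
# crio requires a CNI, so --cni=false is rejected with MK_USAGE (exit 14).
# The same start command with a concrete CNI plugin substituted in:
echo 'out/minikube-linux-arm64 start -p false-121621 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio'
```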
net_test.go:88: 
----------------------- debugLogs start: false-121621 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-121621

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-121621

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-121621

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-121621

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-121621

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-121621

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-121621

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-121621

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-121621

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-121621

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-121621

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-121621" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-121621" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-121621" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-121621" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-121621" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-121621" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-121621" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-121621" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-121621" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-121621" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-121621" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19461-278896/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 16 Aug 2024 18:37:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-424742
contexts:
- context:
    cluster: kubernetes-upgrade-424742
    extensions:
    - extension:
        last-update: Fri, 16 Aug 2024 18:37:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: kubernetes-upgrade-424742
  name: kubernetes-upgrade-424742
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-424742
  user:
    client-certificate: /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/kubernetes-upgrade-424742/client.crt
    client-key: /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/kubernetes-upgrade-424742/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-121621

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

>>> host: docker daemon config:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

>>> host: /etc/docker/daemon.json:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

>>> host: docker system info:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

>>> host: cri-docker daemon status:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

>>> host: cri-docker daemon config:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

>>> host: cri-dockerd version:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

>>> host: containerd daemon status:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

>>> host: containerd daemon config:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

>>> host: /etc/containerd/config.toml:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

>>> host: containerd config dump:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

>>> host: crio daemon status:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

>>> host: crio daemon config:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

>>> host: /etc/crio:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

>>> host: crio config:
* Profile "false-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-121621"

----------------------- debugLogs end: false-121621 [took: 3.21324145s] --------------------------------
helpers_test.go:175: Cleaning up "false-121621" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-121621
--- PASS: TestNetworkPlugins/group/false (3.55s)

TestStartStop/group/old-k8s-version/serial/FirstStart (148.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-604089 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0816 18:40:25.404588  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-604089 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m28.723499281s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (148.72s)

TestStartStop/group/no-preload/serial/FirstStart (70.26s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-564206 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-564206 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (1m10.256833936s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.26s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-604089 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [caf008f1-8847-4000-a5d1-f78d42e6fb92] Pending
helpers_test.go:344: "busybox" [caf008f1-8847-4000-a5d1-f78d42e6fb92] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [caf008f1-8847-4000-a5d1-f78d42e6fb92] Running
E0816 18:42:22.338283  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003802271s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-604089 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.11s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-604089 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-604089 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.405970914s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-604089 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.55s)

TestStartStop/group/old-k8s-version/serial/Stop (13.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-604089 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-604089 --alsologtostderr -v=3: (13.541883453s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.54s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-604089 -n old-k8s-version-604089
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-604089 -n old-k8s-version-604089: exit status 7 (89.545035ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-604089 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/old-k8s-version/serial/SecondStart (144.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-604089 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0816 18:42:45.770806  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-604089 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m23.792871119s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-604089 -n old-k8s-version-604089
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (144.16s)

TestStartStop/group/no-preload/serial/DeployApp (10.52s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-564206 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0d706b04-baba-4390-97f2-2af2f07f81a5] Pending
helpers_test.go:344: "busybox" [0d706b04-baba-4390-97f2-2af2f07f81a5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0d706b04-baba-4390-97f2-2af2f07f81a5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.006324275s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-564206 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.52s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.62s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-564206 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-564206 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.469778145s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-564206 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.62s)

TestStartStop/group/no-preload/serial/Stop (12.3s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-564206 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-564206 --alsologtostderr -v=3: (12.302135822s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.30s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-564206 -n no-preload-564206
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-564206 -n no-preload-564206: exit status 7 (74.361868ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-564206 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (280.09s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-564206 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-564206 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m39.753332574s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-564206 -n no-preload-564206
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (280.09s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-kmgvk" [ab530f6c-556d-49ee-b471-288899f73eb8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004254967s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-kmgvk" [ab530f6c-556d-49ee-b471-288899f73eb8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004510847s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-604089 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-604089 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (3.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-604089 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-604089 -n old-k8s-version-604089
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-604089 -n old-k8s-version-604089: exit status 2 (339.979878ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-604089 -n old-k8s-version-604089
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-604089 -n old-k8s-version-604089: exit status 2 (341.259327ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-604089 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-604089 -n old-k8s-version-604089
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-604089 -n old-k8s-version-604089
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.07s)

TestStartStop/group/embed-certs/serial/FirstStart (52.61s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-867834 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-867834 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (52.612494304s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (52.61s)

TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-867834 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [48af58ff-13f1-479b-b481-c22b5b1a1398] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [48af58ff-13f1-479b-b481-c22b5b1a1398] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003519095s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-867834 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-867834 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-867834 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/embed-certs/serial/Stop (11.94s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-867834 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-867834 --alsologtostderr -v=3: (11.938722611s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.94s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-867834 -n embed-certs-867834
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-867834 -n embed-certs-867834: exit status 7 (83.355034ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-867834 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (297.38s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-867834 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0816 18:47:16.809493  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/old-k8s-version-604089/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:47:16.815836  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/old-k8s-version-604089/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:47:16.827396  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/old-k8s-version-604089/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:47:16.848887  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/old-k8s-version-604089/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:47:16.890284  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/old-k8s-version-604089/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:47:16.971738  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/old-k8s-version-604089/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:47:17.133335  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/old-k8s-version-604089/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:47:17.454780  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/old-k8s-version-604089/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:47:18.096708  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/old-k8s-version-604089/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:47:19.378649  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/old-k8s-version-604089/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:47:21.940310  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/old-k8s-version-604089/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:47:22.338109  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:47:27.061656  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/old-k8s-version-604089/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:47:37.303166  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/old-k8s-version-604089/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:47:45.770299  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:47:57.789964  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/old-k8s-version-604089/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-867834 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m57.040589836s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-867834 -n embed-certs-867834
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (297.38s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-gzjcf" [28074168-f029-4f90-9f3c-3f5bd6fe8871] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003759585s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-gzjcf" [28074168-f029-4f90-9f3c-3f5bd6fe8871] Running
E0816 18:48:38.752069  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/old-k8s-version-604089/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003608035s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-564206 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-564206 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/no-preload/serial/Pause (3.2s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-564206 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-564206 -n no-preload-564206
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-564206 -n no-preload-564206: exit status 2 (320.899348ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-564206 -n no-preload-564206
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-564206 -n no-preload-564206: exit status 2 (319.048105ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-564206 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-564206 -n no-preload-564206
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-564206 -n no-preload-564206
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.20s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (48.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-309677 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-309677 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (48.952192434s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (48.95s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-309677 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [06c49d76-f64a-469f-acd7-db16029e679b] Pending
helpers_test.go:344: "busybox" [06c49d76-f64a-469f-acd7-db16029e679b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [06c49d76-f64a-469f-acd7-db16029e679b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.042156124s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-309677 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.56s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-309677 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-309677 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.050613477s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-309677 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-309677 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-309677 --alsologtostderr -v=3: (12.003120436s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.00s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-309677 -n default-k8s-diff-port-309677
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-309677 -n default-k8s-diff-port-309677: exit status 7 (70.035483ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-309677 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (302.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-309677 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0816 18:50:00.674109  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/old-k8s-version-604089/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-309677 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (5m2.255682483s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-309677 -n default-k8s-diff-port-309677
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (302.65s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vwld8" [05ab44f3-fe49-4ef6-a7aa-4adadc7d6e8f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004592685s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vwld8" [05ab44f3-fe49-4ef6-a7aa-4adadc7d6e8f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004165314s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-867834 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-867834 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (3.04s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-867834 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-867834 -n embed-certs-867834
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-867834 -n embed-certs-867834: exit status 2 (341.278611ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-867834 -n embed-certs-867834
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-867834 -n embed-certs-867834: exit status 2 (371.55971ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-867834 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-867834 -n embed-certs-867834
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-867834 -n embed-certs-867834
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.04s)

TestStartStop/group/newest-cni/serial/FirstStart (34.43s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-812878 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0816 18:52:16.810211  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/old-k8s-version-604089/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:52:22.338740  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-812878 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (34.431895079s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (34.43s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.4s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-812878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-812878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.402162348s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.40s)

TestStartStop/group/newest-cni/serial/Stop (1.29s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-812878 --alsologtostderr -v=3
E0816 18:52:28.842994  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-812878 --alsologtostderr -v=3: (1.287863686s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.29s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-812878 -n newest-cni-812878
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-812878 -n newest-cni-812878: exit status 7 (133.414258ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-812878 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/newest-cni/serial/SecondStart (16.81s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-812878 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0816 18:52:44.515966  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/old-k8s-version-604089/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:52:45.771115  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-812878 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (16.43102084s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-812878 -n newest-cni-812878
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.81s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-812878 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (3.03s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-812878 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-812878 -n newest-cni-812878
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-812878 -n newest-cni-812878: exit status 2 (318.554654ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-812878 -n newest-cni-812878
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-812878 -n newest-cni-812878: exit status 2 (302.376949ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-812878 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-812878 -n newest-cni-812878
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-812878 -n newest-cni-812878
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.03s)

TestNetworkPlugins/group/auto/Start (53.56s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-121621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0816 18:53:24.363489  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/no-preload-564206/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:53:24.369928  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/no-preload-564206/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:53:24.381279  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/no-preload-564206/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:53:24.402687  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/no-preload-564206/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:53:24.444172  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/no-preload-564206/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:53:24.525666  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/no-preload-564206/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:53:24.687504  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/no-preload-564206/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:53:25.009747  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/no-preload-564206/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:53:25.654913  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/no-preload-564206/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:53:26.936776  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/no-preload-564206/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:53:29.498781  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/no-preload-564206/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:53:34.620702  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/no-preload-564206/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:53:44.863101  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/no-preload-564206/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-121621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (53.563680976s)
--- PASS: TestNetworkPlugins/group/auto/Start (53.56s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-121621 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-121621 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-x77xj" [c81d15db-0b12-4415-bff3-1e1ded2d2b98] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-x77xj" [c81d15db-0b12-4415-bff3-1e1ded2d2b98] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003481436s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.30s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-121621 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-121621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-121621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

TestNetworkPlugins/group/kindnet/Start (54.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-121621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0816 18:54:46.307145  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/no-preload-564206/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-121621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (54.162658756s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (54.16s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xdq9b" [3ddeddde-4d16-4da7-97c7-e7e7f9ae6d97] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004102337s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xdq9b" [3ddeddde-4d16-4da7-97c7-e7e7f9ae6d97] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004001176s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-309677 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-ktv2c" [89eb63e6-e869-45dd-8dc7-0d020bd14c20] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005187498s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-309677 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-309677 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-309677 -n default-k8s-diff-port-309677
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-309677 -n default-k8s-diff-port-309677: exit status 2 (325.29168ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-309677 -n default-k8s-diff-port-309677
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-309677 -n default-k8s-diff-port-309677: exit status 2 (334.812794ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-309677 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-309677 -n default-k8s-diff-port-309677
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-309677 -n default-k8s-diff-port-309677
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.14s)
E0816 18:59:27.754293  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/auto-121621/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:59:36.033562  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/default-k8s-diff-port-309677/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:59:36.040108  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/default-k8s-diff-port-309677/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:59:36.051638  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/default-k8s-diff-port-309677/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:59:36.073100  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/default-k8s-diff-port-309677/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:59:36.114630  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/default-k8s-diff-port-309677/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:59:36.196159  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/default-k8s-diff-port-309677/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:59:36.357754  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/default-k8s-diff-port-309677/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:59:36.679292  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/default-k8s-diff-port-309677/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:59:37.321649  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/default-k8s-diff-port-309677/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:59:38.603506  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/default-k8s-diff-port-309677/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:59:41.164960  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/default-k8s-diff-port-309677/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:59:46.286968  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/default-k8s-diff-port-309677/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:59:56.528358  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/default-k8s-diff-port-309677/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-121621 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.33s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-121621 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rxfmv" [b963660d-706a-4df7-b2b8-7e63f904d275] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rxfmv" [b963660d-706a-4df7-b2b8-7e63f904d275] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.012089425s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.33s)

TestNetworkPlugins/group/calico/Start (69.55s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-121621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-121621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m9.549679256s)
--- PASS: TestNetworkPlugins/group/calico/Start (69.55s)

TestNetworkPlugins/group/kindnet/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-121621 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-121621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-121621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.27s)

TestNetworkPlugins/group/custom-flannel/Start (60.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-121621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0816 18:56:08.229233  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/no-preload-564206/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-121621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m0.282475377s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.28s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-2p2hs" [6b7b9112-6a7a-4313-9b51-883147e2bbe4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004231319s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-121621 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (15.32s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-121621 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xzb4d" [15e9f884-d5cc-4e73-9b25-23b4220abb0e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xzb4d" [15e9f884-d5cc-4e73-9b25-23b4220abb0e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 15.004238768s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.32s)

TestNetworkPlugins/group/calico/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-121621 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-121621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-121621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-121621 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-121621 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zw6rh" [1100f9d2-ecd3-478f-ac48-a9b3420a43fd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zw6rh" [1100f9d2-ecd3-478f-ac48-a9b3420a43fd] Running
E0816 18:57:05.406012  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.004181048s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.40s)

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-121621 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-121621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-121621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

TestNetworkPlugins/group/enable-default-cni/Start (51.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-121621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0816 18:57:16.815496  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/old-k8s-version-604089/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:57:22.337792  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/functional-741792/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-121621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (51.371272853s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (51.37s)

TestNetworkPlugins/group/flannel/Start (58.93s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-121621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0816 18:57:45.770994  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/addons-035693/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-121621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (58.926272805s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.93s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-121621 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-121621 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xxgpw" [bae8b091-c2f7-4dee-9e37-3c75ff9cbea8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xxgpw" [bae8b091-c2f7-4dee-9e37-3c75ff9cbea8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003434096s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-121621 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-121621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-121621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-d4nwj" [ceae4521-d491-463d-86f0-2c80c6afa280] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003691334s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/bridge/Start (84.94s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-121621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-121621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m24.936239407s)
--- PASS: TestNetworkPlugins/group/bridge/Start (84.94s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-121621 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/flannel/NetCatPod (13.35s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-121621 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wqbh6" [e28ea0eb-1c1d-46f8-93fb-eeac3f8464a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0816 18:58:46.777818  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/auto-121621/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:58:46.784185  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/auto-121621/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:58:46.795524  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/auto-121621/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:58:46.817288  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/auto-121621/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:58:46.858552  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/auto-121621/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:58:46.939902  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/auto-121621/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:58:47.101961  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/auto-121621/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:58:47.423421  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/auto-121621/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:58:48.065400  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/auto-121621/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:58:49.347440  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/auto-121621/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-wqbh6" [e28ea0eb-1c1d-46f8-93fb-eeac3f8464a6] Running
E0816 18:58:51.908930  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/auto-121621/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:58:52.071370  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/no-preload-564206/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.004803313s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.35s)

TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-121621 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-121621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-121621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-121621 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-121621 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zwjgv" [ba30a394-92d5-4a36-80e9-f712e4faceec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0816 19:00:08.715919  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/auto-121621/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-zwjgv" [ba30a394-92d5-4a36-80e9-f712e4faceec] Running
E0816 19:00:12.062105  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/kindnet-121621/client.crt: no such file or directory" logger="UnhandledError"
E0816 19:00:12.068524  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/kindnet-121621/client.crt: no such file or directory" logger="UnhandledError"
E0816 19:00:12.080041  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/kindnet-121621/client.crt: no such file or directory" logger="UnhandledError"
E0816 19:00:12.101460  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/kindnet-121621/client.crt: no such file or directory" logger="UnhandledError"
E0816 19:00:12.142930  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/kindnet-121621/client.crt: no such file or directory" logger="UnhandledError"
E0816 19:00:12.224452  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/kindnet-121621/client.crt: no such file or directory" logger="UnhandledError"
E0816 19:00:12.386155  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/kindnet-121621/client.crt: no such file or directory" logger="UnhandledError"
E0816 19:00:12.708223  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/kindnet-121621/client.crt: no such file or directory" logger="UnhandledError"
E0816 19:00:13.350414  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/kindnet-121621/client.crt: no such file or directory" logger="UnhandledError"
E0816 19:00:14.632506  284283 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/kindnet-121621/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004014363s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-121621 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-121621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-121621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (30/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0.59s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-240993 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-240993" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-240993
--- SKIP: TestDownloadOnlyKic (0.59s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-205274" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-205274
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (3.38s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-121621 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-121621

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-121621

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-121621

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-121621

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-121621

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-121621

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-121621

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-121621

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-121621

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-121621

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: /etc/hosts:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: /etc/resolv.conf:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-121621

>>> host: crictl pods:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: crictl containers:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> k8s: describe netcat deployment:
error: context "kubenet-121621" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-121621" does not exist

>>> k8s: netcat logs:
error: context "kubenet-121621" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-121621" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-121621" does not exist

>>> k8s: coredns logs:
error: context "kubenet-121621" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-121621" does not exist

>>> k8s: api server logs:
error: context "kubenet-121621" does not exist

>>> host: /etc/cni:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: ip a s:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: ip r s:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: iptables-save:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: iptables table nat:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-121621" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-121621" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-121621" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: kubelet daemon config:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> k8s: kubelet logs:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19461-278896/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 16 Aug 2024 18:37:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-424742
contexts:
- context:
    cluster: kubernetes-upgrade-424742
    extensions:
    - extension:
        last-update: Fri, 16 Aug 2024 18:37:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: kubernetes-upgrade-424742
  name: kubernetes-upgrade-424742
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-424742
  user:
    client-certificate: /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/kubernetes-upgrade-424742/client.crt
    client-key: /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/kubernetes-upgrade-424742/client.key
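Editor's note: every kubectl failure in this debug dump traces back to the kubeconfig above. Its `contexts` list contains only `kubernetes-upgrade-424742` and `current-context` is empty, so any command pinned to `--context kubenet-121621` must fail with "context does not exist". A minimal sketch of that lookup, using a hand-reduced dict stand-in for the dumped file (hypothetical helper, not minikube or kubectl code):

```python
# Hand-reduced stand-in for the kubeconfig dumped above (hypothetical, for illustration).
kubeconfig = {
    "current-context": "",
    "contexts": [{"name": "kubernetes-upgrade-424742"}],
}

def resolve_context(cfg: dict, requested: str) -> str:
    """Mimic the context lookup: fail if the requested context is absent."""
    names = {c["name"] for c in cfg.get("contexts", [])}
    if requested not in names:
        raise ValueError(f'context "{requested}" does not exist')
    return requested

# resolve_context(kubeconfig, "kubenet-121621") raises ValueError,
# matching the repeated 'context "kubenet-121621" does not exist' lines in this log.
```

This is expected for a skipped test: the `kubenet-121621` profile was never started, so no context was ever written for it.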

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-121621

>>> host: docker daemon status:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: docker daemon config:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: docker system info:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: cri-docker daemon status:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: cri-docker daemon config:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: cri-dockerd version:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: containerd daemon status:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: containerd daemon config:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: containerd config dump:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: crio daemon status:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: crio daemon config:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: /etc/crio:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

>>> host: crio config:
* Profile "kubenet-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-121621"

----------------------- debugLogs end: kubenet-121621 [took: 3.224440977s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-121621" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-121621
--- SKIP: TestNetworkPlugins/group/kubenet (3.38s)

x
+
TestNetworkPlugins/group/cilium (3.87s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-121621 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-121621

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-121621

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-121621

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-121621

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-121621

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-121621

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-121621

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-121621

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-121621

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-121621

>>> host: /etc/nsswitch.conf:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: /etc/hosts:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: /etc/resolv.conf:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-121621

>>> host: crictl pods:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: crictl containers:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> k8s: describe netcat deployment:
error: context "cilium-121621" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-121621" does not exist

>>> k8s: netcat logs:
error: context "cilium-121621" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-121621" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-121621" does not exist

>>> k8s: coredns logs:
error: context "cilium-121621" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-121621" does not exist

>>> k8s: api server logs:
error: context "cilium-121621" does not exist

>>> host: /etc/cni:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: ip a s:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: ip r s:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: iptables-save:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: iptables table nat:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-121621

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-121621

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-121621" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-121621" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-121621

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-121621

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-121621" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-121621" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-121621" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-121621" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-121621" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: kubelet daemon config:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> k8s: kubelet logs:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19461-278896/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 16 Aug 2024 18:37:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-424742
contexts:
- context:
    cluster: kubernetes-upgrade-424742
    extensions:
    - extension:
        last-update: Fri, 16 Aug 2024 18:37:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: kubernetes-upgrade-424742
  name: kubernetes-upgrade-424742
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-424742
  user:
    client-certificate: /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/kubernetes-upgrade-424742/client.crt
    client-key: /home/jenkins/minikube-integration/19461-278896/.minikube/profiles/kubernetes-upgrade-424742/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-121621

>>> host: docker daemon status:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: docker daemon config:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: docker system info:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: cri-docker daemon status:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: cri-docker daemon config:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: cri-dockerd version:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: containerd daemon status:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: containerd daemon config:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: containerd config dump:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: crio daemon status:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: crio daemon config:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: /etc/crio:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

>>> host: crio config:
* Profile "cilium-121621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121621"

----------------------- debugLogs end: cilium-121621 [took: 3.713761784s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-121621" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-121621
--- SKIP: TestNetworkPlugins/group/cilium (3.87s)
