Test Report: Docker_Linux_crio_arm64 19312

c58167e77f3b0efe0c3c561ff8e0552b34c41906:2024-07-22:35447

Test fail (8/336)

TestAddons/parallel/Ingress (153.71s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-783853 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-783853 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-783853 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [43421447-2511-42de-b5d2-97ec4232f97f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [43421447-2511-42de-b5d2-97ec4232f97f] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003378091s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-783853 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-783853 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.18694413s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-783853 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-783853 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-783853 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-783853 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-783853 addons disable ingress --alsologtostderr -v=1: (7.830752658s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-783853
helpers_test.go:235: (dbg) docker inspect addons-783853:

-- stdout --
	[
	    {
	        "Id": "4abbb53d7e22a670a6ee51af508267e7f03cef42275cc04f9194102316e1c41d",
	        "Created": "2024-07-22T00:27:34.861807245Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 533726,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-22T00:27:34.997655166Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e2c91a2178aa1acdb3eade350c62303b0cf135b362b91c6aa21cd060c2dbfcac",
	        "ResolvConfPath": "/var/lib/docker/containers/4abbb53d7e22a670a6ee51af508267e7f03cef42275cc04f9194102316e1c41d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4abbb53d7e22a670a6ee51af508267e7f03cef42275cc04f9194102316e1c41d/hostname",
	        "HostsPath": "/var/lib/docker/containers/4abbb53d7e22a670a6ee51af508267e7f03cef42275cc04f9194102316e1c41d/hosts",
	        "LogPath": "/var/lib/docker/containers/4abbb53d7e22a670a6ee51af508267e7f03cef42275cc04f9194102316e1c41d/4abbb53d7e22a670a6ee51af508267e7f03cef42275cc04f9194102316e1c41d-json.log",
	        "Name": "/addons-783853",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-783853:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-783853",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9488449c619f1392ba3b0b1c7a2d4ec41bf726d2377d30379afac14f034b69a5-init/diff:/var/lib/docker/overlay2/0bbbe9537bb983273c69d2396c833f2bdeab0de0333f7a8438fa8a8aec393d0a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9488449c619f1392ba3b0b1c7a2d4ec41bf726d2377d30379afac14f034b69a5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9488449c619f1392ba3b0b1c7a2d4ec41bf726d2377d30379afac14f034b69a5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9488449c619f1392ba3b0b1c7a2d4ec41bf726d2377d30379afac14f034b69a5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-783853",
	                "Source": "/var/lib/docker/volumes/addons-783853/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-783853",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-783853",
	                "name.minikube.sigs.k8s.io": "addons-783853",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d8c00c57964e088368b55ff0c9061679f484e7ceb21197bf8a5b1c4c0f9dd914",
	            "SandboxKey": "/var/run/docker/netns/d8c00c57964e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38981"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38982"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38985"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38983"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38984"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-783853": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "bd40498e70f76a9ad7520c6de89a05a6866dcab232044897434d10ec91edbae9",
	                    "EndpointID": "8baad34fd63d857c6d2a1bbcc0ee2d8097c64494a34bec2f90585656525c594c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-783853",
	                        "4abbb53d7e22"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-783853 -n addons-783853
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-783853 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-783853 logs -n 25: (1.444557814s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-177991                                                                     | download-only-177991   | jenkins | v1.33.1 | 22 Jul 24 00:27 UTC | 22 Jul 24 00:27 UTC |
	| delete  | -p download-only-899574                                                                     | download-only-899574   | jenkins | v1.33.1 | 22 Jul 24 00:27 UTC | 22 Jul 24 00:27 UTC |
	| delete  | -p download-only-182209                                                                     | download-only-182209   | jenkins | v1.33.1 | 22 Jul 24 00:27 UTC | 22 Jul 24 00:27 UTC |
	| delete  | -p download-only-177991                                                                     | download-only-177991   | jenkins | v1.33.1 | 22 Jul 24 00:27 UTC | 22 Jul 24 00:27 UTC |
	| start   | --download-only -p                                                                          | download-docker-688994 | jenkins | v1.33.1 | 22 Jul 24 00:27 UTC |                     |
	|         | download-docker-688994                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-688994                                                                   | download-docker-688994 | jenkins | v1.33.1 | 22 Jul 24 00:27 UTC | 22 Jul 24 00:27 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-175978   | jenkins | v1.33.1 | 22 Jul 24 00:27 UTC |                     |
	|         | binary-mirror-175978                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:32849                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-175978                                                                     | binary-mirror-175978   | jenkins | v1.33.1 | 22 Jul 24 00:27 UTC | 22 Jul 24 00:27 UTC |
	| addons  | disable dashboard -p                                                                        | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:27 UTC |                     |
	|         | addons-783853                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:27 UTC |                     |
	|         | addons-783853                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-783853 --wait=true                                                                | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:27 UTC | 22 Jul 24 00:31 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:31 UTC | 22 Jul 24 00:31 UTC |
	|         | -p addons-783853                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-783853 ip                                                                            | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:31 UTC | 22 Jul 24 00:31 UTC |
	| addons  | addons-783853 addons disable                                                                | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:31 UTC | 22 Jul 24 00:31 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:31 UTC | 22 Jul 24 00:31 UTC |
	|         | -p addons-783853                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-783853 ssh cat                                                                       | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:31 UTC | 22 Jul 24 00:31 UTC |
	|         | /opt/local-path-provisioner/pvc-a10fb3fc-c913-4254-9002-57f08ecaf0f2_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-783853 addons disable                                                                | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:31 UTC | 22 Jul 24 00:32 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:31 UTC | 22 Jul 24 00:31 UTC |
	|         | addons-783853                                                                               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:32 UTC | 22 Jul 24 00:32 UTC |
	|         | addons-783853                                                                               |                        |         |         |                     |                     |
	| addons  | addons-783853 addons                                                                        | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:32 UTC | 22 Jul 24 00:32 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-783853 addons                                                                        | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:32 UTC | 22 Jul 24 00:32 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-783853 ssh curl -s                                                                   | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:32 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-783853 ip                                                                            | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:34 UTC | 22 Jul 24 00:34 UTC |
	| addons  | addons-783853 addons disable                                                                | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:34 UTC | 22 Jul 24 00:34 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-783853 addons disable                                                                | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:34 UTC | 22 Jul 24 00:35 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 00:27:10
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 00:27:10.778649  533196 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:27:10.778848  533196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:27:10.778877  533196 out.go:304] Setting ErrFile to fd 2...
	I0722 00:27:10.778897  533196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:27:10.779179  533196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-526659/.minikube/bin
	I0722 00:27:10.779639  533196 out.go:298] Setting JSON to false
	I0722 00:27:10.780576  533196 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":115782,"bootTime":1721492249,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0722 00:27:10.780673  533196 start.go:139] virtualization:  
	I0722 00:27:10.783124  533196 out.go:177] * [addons-783853] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0722 00:27:10.785451  533196 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:27:10.785522  533196 notify.go:220] Checking for updates...
	I0722 00:27:10.789138  533196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:27:10.790883  533196 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-526659/kubeconfig
	I0722 00:27:10.793729  533196 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-526659/.minikube
	I0722 00:27:10.795743  533196 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0722 00:27:10.797763  533196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 00:27:10.799828  533196 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:27:10.828132  533196 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0722 00:27:10.828247  533196 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0722 00:27:10.878889  533196 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-22 00:27:10.869681834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0722 00:27:10.879003  533196 docker.go:307] overlay module found
	I0722 00:27:10.880841  533196 out.go:177] * Using the docker driver based on user configuration
	I0722 00:27:10.882389  533196 start.go:297] selected driver: docker
	I0722 00:27:10.882408  533196 start.go:901] validating driver "docker" against <nil>
	I0722 00:27:10.882434  533196 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 00:27:10.883068  533196 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0722 00:27:10.945570  533196 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-22 00:27:10.936150292 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0722 00:27:10.945745  533196 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 00:27:10.945995  533196 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:27:10.948035  533196 out.go:177] * Using Docker driver with root privileges
	I0722 00:27:10.949663  533196 cni.go:84] Creating CNI manager for ""
	I0722 00:27:10.949683  533196 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0722 00:27:10.949700  533196 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0722 00:27:10.949834  533196 start.go:340] cluster config:
	{Name:addons-783853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-783853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:27:10.951869  533196 out.go:177] * Starting "addons-783853" primary control-plane node in "addons-783853" cluster
	I0722 00:27:10.953469  533196 cache.go:121] Beginning downloading kic base image for docker with crio
	I0722 00:27:10.955063  533196 out.go:177] * Pulling base image v0.0.44-1721324606-19298 ...
	I0722 00:27:10.956561  533196 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:27:10.956612  533196 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-526659/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	I0722 00:27:10.956625  533196 cache.go:56] Caching tarball of preloaded images
	I0722 00:27:10.956710  533196 preload.go:172] Found /home/jenkins/minikube-integration/19312-526659/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0722 00:27:10.956724  533196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 00:27:10.957130  533196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/config.json ...
	I0722 00:27:10.957164  533196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/config.json: {Name:mkec22b347b3f4f8439a05f8b676bc43b45a69f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:27:10.957331  533196 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0722 00:27:10.971575  533196 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0722 00:27:10.971705  533196 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0722 00:27:10.971728  533196 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0722 00:27:10.971736  533196 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0722 00:27:10.971744  533196 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0722 00:27:10.971752  533196 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from local cache
	I0722 00:27:27.662993  533196 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from cached tarball
	I0722 00:27:27.663034  533196 cache.go:194] Successfully downloaded all kic artifacts
	I0722 00:27:27.663085  533196 start.go:360] acquireMachinesLock for addons-783853: {Name:mk23ed81c9ab4a4da7fcd8d2ab7dd25d44ee9926 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:27:27.663787  533196 start.go:364] duration metric: took 674.922µs to acquireMachinesLock for "addons-783853"
	I0722 00:27:27.663825  533196 start.go:93] Provisioning new machine with config: &{Name:addons-783853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-783853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:27:27.663912  533196 start.go:125] createHost starting for "" (driver="docker")
	I0722 00:27:27.666210  533196 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0722 00:27:27.666444  533196 start.go:159] libmachine.API.Create for "addons-783853" (driver="docker")
	I0722 00:27:27.666477  533196 client.go:168] LocalClient.Create starting
	I0722 00:27:27.666595  533196 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19312-526659/.minikube/certs/ca.pem
	I0722 00:27:28.087092  533196 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19312-526659/.minikube/certs/cert.pem
	I0722 00:27:28.332499  533196 cli_runner.go:164] Run: docker network inspect addons-783853 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0722 00:27:28.348801  533196 cli_runner.go:211] docker network inspect addons-783853 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0722 00:27:28.348889  533196 network_create.go:284] running [docker network inspect addons-783853] to gather additional debugging logs...
	I0722 00:27:28.348908  533196 cli_runner.go:164] Run: docker network inspect addons-783853
	W0722 00:27:28.362490  533196 cli_runner.go:211] docker network inspect addons-783853 returned with exit code 1
	I0722 00:27:28.362525  533196 network_create.go:287] error running [docker network inspect addons-783853]: docker network inspect addons-783853: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-783853 not found
	I0722 00:27:28.362538  533196 network_create.go:289] output of [docker network inspect addons-783853]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-783853 not found
	
	** /stderr **
	I0722 00:27:28.362634  533196 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0722 00:27:28.376513  533196 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000490170}
	I0722 00:27:28.376554  533196 network_create.go:124] attempt to create docker network addons-783853 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0722 00:27:28.376610  533196 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-783853 addons-783853
	I0722 00:27:28.443047  533196 network_create.go:108] docker network addons-783853 192.168.49.0/24 created
	I0722 00:27:28.443087  533196 kic.go:121] calculated static IP "192.168.49.2" for the "addons-783853" container
	I0722 00:27:28.443160  533196 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0722 00:27:28.458021  533196 cli_runner.go:164] Run: docker volume create addons-783853 --label name.minikube.sigs.k8s.io=addons-783853 --label created_by.minikube.sigs.k8s.io=true
	I0722 00:27:28.474547  533196 oci.go:103] Successfully created a docker volume addons-783853
	I0722 00:27:28.474641  533196 cli_runner.go:164] Run: docker run --rm --name addons-783853-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-783853 --entrypoint /usr/bin/test -v addons-783853:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0722 00:27:30.570394  533196 cli_runner.go:217] Completed: docker run --rm --name addons-783853-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-783853 --entrypoint /usr/bin/test -v addons-783853:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib: (2.095711473s)
	I0722 00:27:30.570428  533196 oci.go:107] Successfully prepared a docker volume addons-783853
	I0722 00:27:30.570444  533196 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:27:30.570463  533196 kic.go:194] Starting extracting preloaded images to volume ...
	I0722 00:27:30.570548  533196 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19312-526659/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-783853:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0722 00:27:34.799206  533196 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19312-526659/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-783853:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir: (4.22861531s)
	I0722 00:27:34.799240  533196 kic.go:203] duration metric: took 4.228773712s to extract preloaded images to volume ...
	W0722 00:27:34.799393  533196 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0722 00:27:34.799512  533196 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0722 00:27:34.847515  533196 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-783853 --name addons-783853 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-783853 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-783853 --network addons-783853 --ip 192.168.49.2 --volume addons-783853:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f
	I0722 00:27:35.172931  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Running}}
	I0722 00:27:35.193276  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:27:35.215373  533196 cli_runner.go:164] Run: docker exec addons-783853 stat /var/lib/dpkg/alternatives/iptables
	I0722 00:27:35.267632  533196 oci.go:144] the created container "addons-783853" has a running status.
	I0722 00:27:35.267660  533196 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa...
	I0722 00:27:35.849542  533196 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0722 00:27:35.873284  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:27:35.902409  533196 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0722 00:27:35.902429  533196 kic_runner.go:114] Args: [docker exec --privileged addons-783853 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0722 00:27:35.974149  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:27:35.999781  533196 machine.go:94] provisionDockerMachine start ...
	I0722 00:27:35.999870  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:27:36.029004  533196 main.go:141] libmachine: Using SSH client type: native
	I0722 00:27:36.029340  533196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38981 <nil> <nil>}
	I0722 00:27:36.029352  533196 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:27:36.188478  533196 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-783853
	
	I0722 00:27:36.188501  533196 ubuntu.go:169] provisioning hostname "addons-783853"
	I0722 00:27:36.188569  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:27:36.211903  533196 main.go:141] libmachine: Using SSH client type: native
	I0722 00:27:36.212172  533196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38981 <nil> <nil>}
	I0722 00:27:36.212185  533196 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-783853 && echo "addons-783853" | sudo tee /etc/hostname
	I0722 00:27:36.356251  533196 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-783853
	
	I0722 00:27:36.356330  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:27:36.372898  533196 main.go:141] libmachine: Using SSH client type: native
	I0722 00:27:36.373143  533196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38981 <nil> <nil>}
	I0722 00:27:36.373159  533196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-783853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-783853/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-783853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:27:36.492627  533196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:27:36.492655  533196 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19312-526659/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-526659/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-526659/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-526659/.minikube}
	I0722 00:27:36.492680  533196 ubuntu.go:177] setting up certificates
	I0722 00:27:36.492693  533196 provision.go:84] configureAuth start
	I0722 00:27:36.492781  533196 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-783853
	I0722 00:27:36.509920  533196 provision.go:143] copyHostCerts
	I0722 00:27:36.510081  533196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-526659/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-526659/.minikube/cert.pem (1123 bytes)
	I0722 00:27:36.510210  533196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-526659/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-526659/.minikube/key.pem (1675 bytes)
	I0722 00:27:36.510278  533196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-526659/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-526659/.minikube/ca.pem (1078 bytes)
	I0722 00:27:36.510331  533196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-526659/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-526659/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-526659/.minikube/certs/ca-key.pem org=jenkins.addons-783853 san=[127.0.0.1 192.168.49.2 addons-783853 localhost minikube]
	I0722 00:27:36.717226  533196 provision.go:177] copyRemoteCerts
	I0722 00:27:36.717314  533196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:27:36.717361  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:27:36.733461  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:27:36.821548  533196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-526659/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 00:27:36.847624  533196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-526659/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0722 00:27:36.872428  533196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-526659/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:27:36.898898  533196 provision.go:87] duration metric: took 406.191296ms to configureAuth
	I0722 00:27:36.898925  533196 ubuntu.go:193] setting minikube options for container-runtime
	I0722 00:27:36.899109  533196 config.go:182] Loaded profile config "addons-783853": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:27:36.899226  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:27:36.916948  533196 main.go:141] libmachine: Using SSH client type: native
	I0722 00:27:36.917218  533196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38981 <nil> <nil>}
	I0722 00:27:36.917239  533196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:27:37.136697  533196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:27:37.136718  533196 machine.go:97] duration metric: took 1.136918196s to provisionDockerMachine
	I0722 00:27:37.136834  533196 client.go:171] duration metric: took 9.470245646s to LocalClient.Create
	I0722 00:27:37.136850  533196 start.go:167] duration metric: took 9.470405451s to libmachine.API.Create "addons-783853"
	I0722 00:27:37.136857  533196 start.go:293] postStartSetup for "addons-783853" (driver="docker")
	I0722 00:27:37.136868  533196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:27:37.136936  533196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:27:37.136982  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:27:37.154774  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:27:37.245763  533196 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:27:37.248889  533196 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0722 00:27:37.248958  533196 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0722 00:27:37.248974  533196 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0722 00:27:37.248982  533196 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0722 00:27:37.249008  533196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-526659/.minikube/addons for local assets ...
	I0722 00:27:37.249095  533196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-526659/.minikube/files for local assets ...
	I0722 00:27:37.249153  533196 start.go:296] duration metric: took 112.290287ms for postStartSetup
	I0722 00:27:37.249483  533196 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-783853
	I0722 00:27:37.265523  533196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/config.json ...
	I0722 00:27:37.265816  533196 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 00:27:37.265872  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:27:37.282055  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:27:37.369546  533196 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0722 00:27:37.373949  533196 start.go:128] duration metric: took 9.710019711s to createHost
	I0722 00:27:37.373973  533196 start.go:83] releasing machines lock for "addons-783853", held for 9.710168899s
	I0722 00:27:37.374069  533196 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-783853
	I0722 00:27:37.389723  533196 ssh_runner.go:195] Run: cat /version.json
	I0722 00:27:37.389737  533196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:27:37.389780  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:27:37.389801  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:27:37.407064  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:27:37.414085  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:27:37.622495  533196 ssh_runner.go:195] Run: systemctl --version
	I0722 00:27:37.626908  533196 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:27:37.767094  533196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0722 00:27:37.771352  533196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:27:37.791178  533196 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0722 00:27:37.791291  533196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:27:37.824031  533196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0722 00:27:37.824056  533196 start.go:495] detecting cgroup driver to use...
	I0722 00:27:37.824108  533196 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0722 00:27:37.824165  533196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:27:37.839861  533196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:27:37.851948  533196 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:27:37.852056  533196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:27:37.866175  533196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:27:37.881025  533196 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:27:37.970821  533196 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:27:38.065566  533196 docker.go:233] disabling docker service ...
	I0722 00:27:38.065643  533196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:27:38.086531  533196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:27:38.098685  533196 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:27:38.186931  533196 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:27:38.283605  533196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:27:38.295280  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:27:38.312062  533196 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 00:27:38.312177  533196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:27:38.322414  533196 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:27:38.322516  533196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:27:38.332332  533196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:27:38.342315  533196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:27:38.352036  533196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:27:38.361966  533196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:27:38.371803  533196 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:27:38.388074  533196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:27:38.399220  533196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:27:38.407632  533196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:27:38.416060  533196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:27:38.494242  533196 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:27:38.605136  533196 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:27:38.605234  533196 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:27:38.608783  533196 start.go:563] Will wait 60s for crictl version
	I0722 00:27:38.608863  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:27:38.612050  533196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:27:38.648157  533196 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0722 00:27:38.648289  533196 ssh_runner.go:195] Run: crio --version
	I0722 00:27:38.687573  533196 ssh_runner.go:195] Run: crio --version
	I0722 00:27:38.731283  533196 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.24.6 ...
	I0722 00:27:38.733252  533196 cli_runner.go:164] Run: docker network inspect addons-783853 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0722 00:27:38.749343  533196 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0722 00:27:38.752787  533196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:27:38.763268  533196 kubeadm.go:883] updating cluster {Name:addons-783853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-783853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:27:38.763397  533196 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:27:38.763462  533196 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:27:38.839559  533196 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:27:38.839581  533196 crio.go:433] Images already preloaded, skipping extraction
	I0722 00:27:38.839648  533196 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:27:38.875020  533196 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:27:38.875046  533196 cache_images.go:84] Images are preloaded, skipping loading
	I0722 00:27:38.875056  533196 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.3 crio true true} ...
	I0722 00:27:38.875157  533196 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-783853 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-783853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:27:38.875239  533196 ssh_runner.go:195] Run: crio config
	I0722 00:27:38.928648  533196 cni.go:84] Creating CNI manager for ""
	I0722 00:27:38.928679  533196 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0722 00:27:38.928691  533196 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:27:38.928715  533196 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-783853 NodeName:addons-783853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:27:38.928907  533196 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-783853"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 00:27:38.928990  533196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:27:38.938002  533196 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:27:38.938077  533196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:27:38.946956  533196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0722 00:27:38.966495  533196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:27:38.985031  533196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0722 00:27:39.005010  533196 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0722 00:27:39.010874  533196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:27:39.022265  533196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:27:39.104848  533196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:27:39.119132  533196 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853 for IP: 192.168.49.2
	I0722 00:27:39.119196  533196 certs.go:194] generating shared ca certs ...
	I0722 00:27:39.119226  533196 certs.go:226] acquiring lock for ca certs: {Name:mkdc7fe7e192116c10cb8e16455129169d01b878 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:27:39.119393  533196 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-526659/.minikube/ca.key
	I0722 00:27:39.476055  533196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-526659/.minikube/ca.crt ...
	I0722 00:27:39.476131  533196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-526659/.minikube/ca.crt: {Name:mkb33f73b23802ede958554614e4b008c48b2f10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:27:39.476357  533196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-526659/.minikube/ca.key ...
	I0722 00:27:39.476390  533196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-526659/.minikube/ca.key: {Name:mk1fd7d6078677bea533048c8859053762632ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:27:39.476937  533196 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-526659/.minikube/proxy-client-ca.key
	I0722 00:27:39.974794  533196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-526659/.minikube/proxy-client-ca.crt ...
	I0722 00:27:39.974865  533196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-526659/.minikube/proxy-client-ca.crt: {Name:mkb189d123154e6025a41e754cb075267d1419d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:27:39.975693  533196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-526659/.minikube/proxy-client-ca.key ...
	I0722 00:27:39.975743  533196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-526659/.minikube/proxy-client-ca.key: {Name:mkd77b44cda8132180ad1a361631e311da024968 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:27:39.976789  533196 certs.go:256] generating profile certs ...
	I0722 00:27:39.976871  533196 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.key
	I0722 00:27:39.976892  533196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt with IP's: []
	I0722 00:27:40.140302  533196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt ...
	I0722 00:27:40.140335  533196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: {Name:mkb7266298024a27cdd2f72065f78f1a4a0e8164 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:27:40.140997  533196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.key ...
	I0722 00:27:40.141017  533196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.key: {Name:mk24a76981df5d3ce591084fd9ee6d4a6b9c8150 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:27:40.141103  533196 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/apiserver.key.3ff2709b
	I0722 00:27:40.141119  533196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/apiserver.crt.3ff2709b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0722 00:27:40.452574  533196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/apiserver.crt.3ff2709b ...
	I0722 00:27:40.452603  533196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/apiserver.crt.3ff2709b: {Name:mkbb85f86f96dbff49a24a33699cbb07a9206e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:27:40.453231  533196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/apiserver.key.3ff2709b ...
	I0722 00:27:40.453252  533196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/apiserver.key.3ff2709b: {Name:mke0a6723860c1ff374f83d466535ba261d09ca3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:27:40.453352  533196 certs.go:381] copying /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/apiserver.crt.3ff2709b -> /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/apiserver.crt
	I0722 00:27:40.453432  533196 certs.go:385] copying /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/apiserver.key.3ff2709b -> /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/apiserver.key
	I0722 00:27:40.453487  533196 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/proxy-client.key
	I0722 00:27:40.453506  533196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/proxy-client.crt with IP's: []
	I0722 00:27:40.677557  533196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/proxy-client.crt ...
	I0722 00:27:40.677635  533196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/proxy-client.crt: {Name:mk5fdd9829212620bde6a507271dd05648de3b22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:27:40.677910  533196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/proxy-client.key ...
	I0722 00:27:40.677947  533196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/proxy-client.key: {Name:mk925f703c0b5e1260489129120df520eb854e5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:27:40.678262  533196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-526659/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 00:27:40.678340  533196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-526659/.minikube/certs/ca.pem (1078 bytes)
	I0722 00:27:40.678411  533196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-526659/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:27:40.678460  533196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-526659/.minikube/certs/key.pem (1675 bytes)
	I0722 00:27:40.679198  533196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-526659/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:27:40.710743  533196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-526659/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 00:27:40.744322  533196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-526659/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:27:40.770269  533196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-526659/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:27:40.795634  533196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0722 00:27:40.818926  533196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:27:40.842636  533196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:27:40.866479  533196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 00:27:40.889805  533196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-526659/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:27:40.913572  533196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:27:40.932012  533196 ssh_runner.go:195] Run: openssl version
	I0722 00:27:40.937804  533196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:27:40.947545  533196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:27:40.950885  533196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:27:40.950957  533196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:27:40.958046  533196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:27:40.967250  533196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:27:40.970460  533196 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 00:27:40.970519  533196 kubeadm.go:392] StartCluster: {Name:addons-783853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-783853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:27:40.970605  533196 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:27:40.970662  533196 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:27:41.011156  533196 cri.go:89] found id: ""
	I0722 00:27:41.011232  533196 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:27:41.020304  533196 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:27:41.029320  533196 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0722 00:27:41.029385  533196 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:27:41.041300  533196 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:27:41.041371  533196 kubeadm.go:157] found existing configuration files:
	
	I0722 00:27:41.041458  533196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:27:41.050831  533196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:27:41.050945  533196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:27:41.059287  533196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:27:41.068191  533196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:27:41.068287  533196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:27:41.077204  533196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:27:41.086360  533196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:27:41.086457  533196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:27:41.095146  533196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:27:41.104046  533196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:27:41.104131  533196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
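The four grep/rm pairs above all follow one pattern: if a kubeconfig under /etc/kubernetes does not mention the expected control-plane endpoint, minikube deletes it so `kubeadm init` can regenerate it (grep exits with status 2 here because the files do not exist yet). A minimal sketch of that pattern against a scratch directory (file contents are made up for illustration):

```shell
# Reproduce the cleanup pattern from the log in a scratch directory
# (the real code targets /etc/kubernetes; contents here are hypothetical).
KUBE_DIR=$(mktemp -d)
ENDPOINT="https://control-plane.minikube.internal:8443"
printf 'server: %s\n' "$ENDPOINT" > "$KUBE_DIR/admin.conf"      # has the endpoint: kept
printf 'server: https://old:8443\n' > "$KUBE_DIR/kubelet.conf"  # stale endpoint: removed
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  # grep exits 1 on no match and 2 on a missing file (the log's "status 2");
  # either way the file is deleted so kubeadm init can rewrite it.
  grep -q "$ENDPOINT" "$KUBE_DIR/$f" 2>/dev/null || rm -f "$KUBE_DIR/$f"
done
ls "$KUBE_DIR"   # prints: admin.conf
```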
	I0722 00:27:41.112598  533196 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0722 00:27:41.198517  533196 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1064-aws\n", err: exit status 1
	I0722 00:27:41.270716  533196 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:27:56.830932  533196 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 00:27:56.830992  533196 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:27:56.831083  533196 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0722 00:27:56.831160  533196 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1064-aws
	I0722 00:27:56.831200  533196 kubeadm.go:310] OS: Linux
	I0722 00:27:56.831251  533196 kubeadm.go:310] CGROUPS_CPU: enabled
	I0722 00:27:56.831309  533196 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0722 00:27:56.831355  533196 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0722 00:27:56.831402  533196 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0722 00:27:56.831449  533196 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0722 00:27:56.831496  533196 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0722 00:27:56.831540  533196 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0722 00:27:56.831587  533196 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0722 00:27:56.831633  533196 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0722 00:27:56.831703  533196 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:27:56.831796  533196 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:27:56.831887  533196 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 00:27:56.831950  533196 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:27:56.833999  533196 out.go:204]   - Generating certificates and keys ...
	I0722 00:27:56.834094  533196 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:27:56.834163  533196 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:27:56.834231  533196 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0722 00:27:56.834290  533196 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0722 00:27:56.834353  533196 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0722 00:27:56.834405  533196 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0722 00:27:56.834460  533196 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0722 00:27:56.834576  533196 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-783853 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0722 00:27:56.834632  533196 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0722 00:27:56.834754  533196 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-783853 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0722 00:27:56.834822  533196 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0722 00:27:56.834887  533196 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0722 00:27:56.834934  533196 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0722 00:27:56.834991  533196 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:27:56.835044  533196 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:27:56.835103  533196 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 00:27:56.835160  533196 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:27:56.835225  533196 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:27:56.835286  533196 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:27:56.835369  533196 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:27:56.835441  533196 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:27:56.837105  533196 out.go:204]   - Booting up control plane ...
	I0722 00:27:56.837238  533196 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:27:56.837369  533196 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:27:56.837454  533196 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:27:56.837573  533196 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:27:56.837664  533196 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:27:56.837709  533196 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:27:56.837842  533196 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 00:27:56.837915  533196 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 00:27:56.837976  533196 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501628622s
	I0722 00:27:56.838052  533196 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 00:27:56.838113  533196 kubeadm.go:310] [api-check] The API server is healthy after 6.001737803s
	I0722 00:27:56.838217  533196 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 00:27:56.838339  533196 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 00:27:56.838399  533196 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 00:27:56.838575  533196 kubeadm.go:310] [mark-control-plane] Marking the node addons-783853 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 00:27:56.838632  533196 kubeadm.go:310] [bootstrap-token] Using token: e7b4i5.vbym5s3kk6kc87y8
	I0722 00:27:56.840381  533196 out.go:204]   - Configuring RBAC rules ...
	I0722 00:27:56.840493  533196 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 00:27:56.840582  533196 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 00:27:56.840722  533196 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 00:27:56.840901  533196 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 00:27:56.841025  533196 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 00:27:56.841113  533196 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 00:27:56.841228  533196 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 00:27:56.841279  533196 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 00:27:56.841332  533196 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 00:27:56.841340  533196 kubeadm.go:310] 
	I0722 00:27:56.841398  533196 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 00:27:56.841406  533196 kubeadm.go:310] 
	I0722 00:27:56.841480  533196 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 00:27:56.841489  533196 kubeadm.go:310] 
	I0722 00:27:56.841513  533196 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 00:27:56.841573  533196 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 00:27:56.841627  533196 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 00:27:56.841631  533196 kubeadm.go:310] 
	I0722 00:27:56.841683  533196 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 00:27:56.841690  533196 kubeadm.go:310] 
	I0722 00:27:56.841736  533196 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 00:27:56.841743  533196 kubeadm.go:310] 
	I0722 00:27:56.841794  533196 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 00:27:56.841868  533196 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 00:27:56.841936  533196 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 00:27:56.841943  533196 kubeadm.go:310] 
	I0722 00:27:56.842030  533196 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 00:27:56.842108  533196 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 00:27:56.842115  533196 kubeadm.go:310] 
	I0722 00:27:56.842196  533196 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token e7b4i5.vbym5s3kk6kc87y8 \
	I0722 00:27:56.842299  533196 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7164c6734272d868157842346e8690c5e25f90de83e5fe6d168aaf43b24e1417 \
	I0722 00:27:56.842322  533196 kubeadm.go:310] 	--control-plane 
	I0722 00:27:56.842336  533196 kubeadm.go:310] 
	I0722 00:27:56.842417  533196 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 00:27:56.842424  533196 kubeadm.go:310] 
	I0722 00:27:56.842503  533196 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token e7b4i5.vbym5s3kk6kc87y8 \
	I0722 00:27:56.842618  533196 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7164c6734272d868157842346e8690c5e25f90de83e5fe6d168aaf43b24e1417 
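The `--discovery-token-ca-cert-hash` in the join commands above is the SHA-256 digest of the cluster CA's public key, and can be recomputed from the CA certificate with the standard kubeadm openssl recipe. The helper name and default path below are assumptions for illustration, not taken from the log:

```shell
# Hypothetical helper: derive the discovery-token-ca-cert-hash value from a
# CA certificate. Prefix the result with "sha256:" when passing it to
# `kubeadm join`. The default path is kubeadm's usual CA location.
ca_cert_hash() {
  openssl x509 -pubkey -in "${1:-/etc/kubernetes/pki/ca.crt}" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
```

On the node in this log the matching invocation would point at minikube's certificate directory instead, since the `[certs]` lines above show `certificateDir` set to `/var/lib/minikube/certs`.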
	I0722 00:27:56.842630  533196 cni.go:84] Creating CNI manager for ""
	I0722 00:27:56.842638  533196 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0722 00:27:56.844482  533196 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0722 00:27:56.846195  533196 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0722 00:27:56.850824  533196 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0722 00:27:56.850845  533196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0722 00:27:56.870191  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0722 00:27:57.177431  533196 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:27:57.177570  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:27:57.177654  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-783853 minikube.k8s.io/updated_at=2024_07_22T00_27_57_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=addons-783853 minikube.k8s.io/primary=true
	I0722 00:27:57.327652  533196 ops.go:34] apiserver oom_adj: -16
	I0722 00:27:57.327770  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:27:57.828433  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:27:58.328562  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:27:58.827890  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:27:59.328555  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:27:59.827926  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:00.328588  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:00.828109  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:01.328865  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:01.828851  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:02.328684  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:02.827921  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:03.327955  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:03.828201  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:04.328865  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:04.828616  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:05.328859  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:05.828403  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:06.328500  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:06.827924  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:07.327974  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:07.827958  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:08.328403  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:08.827874  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:09.328377  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:09.828355  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:09.942815  533196 kubeadm.go:1113] duration metric: took 12.765292953s to wait for elevateKubeSystemPrivileges
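The burst of identical `kubectl get sa default` runs above (one roughly every 500ms from 00:27:57 to 00:28:09) is a poll loop: minikube retries until the `default` service account exists before applying the `minikube-rbac` binding. The generic shape of such a wait, as a sketch (helper name and parameters are hypothetical):

```shell
# Generic retry-until-success loop of the kind seen in the log: run a probe
# repeatedly at a fixed interval until it succeeds or a deadline passes.
wait_for() {  # wait_for <timeout_seconds> <interval_seconds> <command...>
  deadline=$(( $(date +%s) + $1 )); shift
  interval=$1; shift
  until "$@"; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep "$interval"
  done
}
# e.g.: wait_for 60 0.5 kubectl get sa default --kubeconfig "$KUBECONFIG"
```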
	I0722 00:28:09.942849  533196 kubeadm.go:394] duration metric: took 28.972333809s to StartCluster
	I0722 00:28:09.942867  533196 settings.go:142] acquiring lock: {Name:mk10d2325078b8f55c71d679c871958034fe6b22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:28:09.943529  533196 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-526659/kubeconfig
	I0722 00:28:09.943920  533196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-526659/kubeconfig: {Name:mk85dda85ca5bc25fe23397cf817bcf2d3bbdbc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:28:09.944578  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0722 00:28:09.944601  533196 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:28:09.944893  533196 config.go:182] Loaded profile config "addons-783853": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:28:09.945005  533196 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0722 00:28:09.945088  533196 addons.go:69] Setting yakd=true in profile "addons-783853"
	I0722 00:28:09.945109  533196 addons.go:234] Setting addon yakd=true in "addons-783853"
	I0722 00:28:09.945133  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:09.945619  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.946123  533196 addons.go:69] Setting cloud-spanner=true in profile "addons-783853"
	I0722 00:28:09.946134  533196 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-783853"
	I0722 00:28:09.946152  533196 addons.go:234] Setting addon cloud-spanner=true in "addons-783853"
	I0722 00:28:09.946178  533196 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-783853"
	I0722 00:28:09.946184  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:09.946200  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:09.946573  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.946626  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.946127  533196 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-783853"
	I0722 00:28:09.948993  533196 addons.go:69] Setting default-storageclass=true in profile "addons-783853"
	I0722 00:28:09.949039  533196 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-783853"
	I0722 00:28:09.949151  533196 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-783853"
	I0722 00:28:09.949256  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:09.949321  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.956856  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.949327  533196 addons.go:69] Setting gcp-auth=true in profile "addons-783853"
	I0722 00:28:09.957272  533196 mustload.go:65] Loading cluster: addons-783853
	I0722 00:28:09.957493  533196 config.go:182] Loaded profile config "addons-783853": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:28:09.957789  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.949336  533196 addons.go:69] Setting ingress=true in profile "addons-783853"
	I0722 00:28:09.968689  533196 addons.go:234] Setting addon ingress=true in "addons-783853"
	I0722 00:28:09.968801  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:09.969309  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.949341  533196 addons.go:69] Setting ingress-dns=true in profile "addons-783853"
	I0722 00:28:09.977363  533196 addons.go:234] Setting addon ingress-dns=true in "addons-783853"
	I0722 00:28:09.977450  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:09.978395  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.949345  533196 addons.go:69] Setting inspektor-gadget=true in profile "addons-783853"
	I0722 00:28:09.987800  533196 addons.go:234] Setting addon inspektor-gadget=true in "addons-783853"
	I0722 00:28:09.987865  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:09.992650  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.949348  533196 addons.go:69] Setting metrics-server=true in profile "addons-783853"
	I0722 00:28:10.008028  533196 addons.go:234] Setting addon metrics-server=true in "addons-783853"
	I0722 00:28:10.008104  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:10.008642  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.949355  533196 out.go:177] * Verifying Kubernetes components...
	I0722 00:28:09.949374  533196 addons.go:69] Setting registry=true in profile "addons-783853"
	I0722 00:28:10.047210  533196 addons.go:234] Setting addon registry=true in "addons-783853"
	I0722 00:28:10.047288  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:10.047814  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:10.066054  533196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:28:09.949388  533196 addons.go:69] Setting storage-provisioner=true in profile "addons-783853"
	I0722 00:28:10.072533  533196 addons.go:234] Setting addon storage-provisioner=true in "addons-783853"
	I0722 00:28:10.072603  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:10.080892  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.949396  533196 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-783853"
	I0722 00:28:10.091613  533196 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-783853"
	I0722 00:28:10.091959  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.949402  533196 addons.go:69] Setting volcano=true in profile "addons-783853"
	I0722 00:28:10.102616  533196 addons.go:234] Setting addon volcano=true in "addons-783853"
	I0722 00:28:10.102663  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:10.103375  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.949408  533196 addons.go:69] Setting volumesnapshots=true in profile "addons-783853"
	I0722 00:28:10.121151  533196 addons.go:234] Setting addon volumesnapshots=true in "addons-783853"
	I0722 00:28:10.121194  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:10.139330  533196 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0722 00:28:10.145101  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:10.149961  533196 addons.go:234] Setting addon default-storageclass=true in "addons-783853"
	I0722 00:28:10.154249  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:10.154796  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:10.167489  533196 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0722 00:28:10.167673  533196 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0722 00:28:10.172812  533196 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0722 00:28:10.174362  533196 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0722 00:28:10.174382  533196 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0722 00:28:10.175070  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:10.185772  533196 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0722 00:28:10.185851  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0722 00:28:10.185946  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:10.201468  533196 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0722 00:28:10.204533  533196 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0722 00:28:10.206749  533196 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0722 00:28:10.206772  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0722 00:28:10.206838  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:10.213513  533196 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0722 00:28:10.219988  533196 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0722 00:28:10.221579  533196 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0722 00:28:10.221602  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0722 00:28:10.221672  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:10.225040  533196 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0722 00:28:10.226740  533196 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:28:10.226762  533196 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:28:10.226834  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:10.240019  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:10.270874  533196 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0722 00:28:10.272620  533196 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0722 00:28:10.274555  533196 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0722 00:28:10.276123  533196 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0722 00:28:10.279058  533196 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0722 00:28:10.282491  533196 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0722 00:28:10.283355  533196 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-783853"
	I0722 00:28:10.283395  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:10.283786  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:10.288894  533196 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0722 00:28:10.292651  533196 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0722 00:28:10.294230  533196 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0722 00:28:10.294269  533196 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0722 00:28:10.294356  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:10.300930  533196 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0722 00:28:10.300955  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0722 00:28:10.301020  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:10.317976  533196 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0722 00:28:10.318017  533196 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0722 00:28:10.318086  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:10.328471  533196 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:28:10.331177  533196 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:28:10.331201  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:28:10.331279  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:10.363305  533196 out.go:177]   - Using image docker.io/registry:2.8.3
	I0722 00:28:10.363754  533196 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:28:10.363771  533196 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:28:10.363831  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:10.367102  533196 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0722 00:28:10.368842  533196 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0722 00:28:10.368864  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0722 00:28:10.368947  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	W0722 00:28:10.384021  533196 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0722 00:28:10.412856  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.423909  533196 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0722 00:28:10.428924  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.429299  533196 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0722 00:28:10.429420  533196 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0722 00:28:10.432808  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:10.443949  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.474286  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.520621  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0722 00:28:10.532059  533196 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0722 00:28:10.532194  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.536614  533196 out.go:177]   - Using image docker.io/busybox:stable
	I0722 00:28:10.540353  533196 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0722 00:28:10.540375  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0722 00:28:10.540444  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:10.560857  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.569014  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.571150  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.595469  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.601293  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.603702  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.605458  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.633275  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.762580  533196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:28:10.834112  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0722 00:28:10.921668  533196 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0722 00:28:10.921694  533196 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0722 00:28:11.015968  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0722 00:28:11.038003  533196 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0722 00:28:11.038031  533196 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0722 00:28:11.078449  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0722 00:28:11.088188  533196 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0722 00:28:11.088212  533196 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0722 00:28:11.117628  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0722 00:28:11.135668  533196 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0722 00:28:11.135695  533196 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0722 00:28:11.151046  533196 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:28:11.151074  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0722 00:28:11.153753  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:28:11.161879  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:28:11.171077  533196 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0722 00:28:11.171149  533196 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0722 00:28:11.173327  533196 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0722 00:28:11.173395  533196 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0722 00:28:11.232562  533196 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0722 00:28:11.232633  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0722 00:28:11.249787  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0722 00:28:11.288114  533196 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0722 00:28:11.288192  533196 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0722 00:28:11.330466  533196 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0722 00:28:11.330545  533196 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0722 00:28:11.355532  533196 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:28:11.355618  533196 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:28:11.355698  533196 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0722 00:28:11.355737  533196 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0722 00:28:11.379823  533196 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0722 00:28:11.379896  533196 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0722 00:28:11.415226  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0722 00:28:11.492295  533196 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0722 00:28:11.492365  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0722 00:28:11.522833  533196 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0722 00:28:11.522910  533196 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0722 00:28:11.527923  533196 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0722 00:28:11.528000  533196 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0722 00:28:11.530553  533196 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0722 00:28:11.530637  533196 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0722 00:28:11.569355  533196 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:28:11.569431  533196 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:28:11.659694  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0722 00:28:11.692467  533196 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0722 00:28:11.692547  533196 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0722 00:28:11.696861  533196 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0722 00:28:11.696932  533196 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0722 00:28:11.721512  533196 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0722 00:28:11.721593  533196 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0722 00:28:11.735973  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:28:11.828634  533196 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0722 00:28:11.828711  533196 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0722 00:28:11.843231  533196 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0722 00:28:11.843301  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0722 00:28:11.867473  533196 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0722 00:28:11.867547  533196 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0722 00:28:11.943673  533196 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0722 00:28:11.943746  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0722 00:28:11.947920  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0722 00:28:11.991152  533196 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0722 00:28:11.991226  533196 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0722 00:28:12.095027  533196 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0722 00:28:12.095103  533196 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0722 00:28:12.127275  533196 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0722 00:28:12.127346  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0722 00:28:12.196257  533196 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0722 00:28:12.196329  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0722 00:28:12.213698  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0722 00:28:12.247278  533196 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0722 00:28:12.247348  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0722 00:28:12.321434  533196 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0722 00:28:12.321516  533196 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0722 00:28:12.482819  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0722 00:28:12.859679  533196 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.339020713s)
	I0722 00:28:12.859755  533196 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0722 00:28:12.860958  533196 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.098310995s)
	I0722 00:28:12.862017  533196 node_ready.go:35] waiting up to 6m0s for node "addons-783853" to be "Ready" ...
	I0722 00:28:13.926565  533196 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-783853" context rescaled to 1 replicas
	I0722 00:28:15.064945  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.23075509s)
	I0722 00:28:15.198441  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:17.147061  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.131053227s)
	I0722 00:28:17.147093  533196 addons.go:475] Verifying addon ingress=true in "addons-783853"
	I0722 00:28:17.147282  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.068805958s)
	I0722 00:28:17.147328  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.029676603s)
	I0722 00:28:17.147352  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.993578731s)
	I0722 00:28:17.147535  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.985634195s)
	I0722 00:28:17.147579  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.897774226s)
	I0722 00:28:17.147662  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.732368904s)
	I0722 00:28:17.147672  533196 addons.go:475] Verifying addon registry=true in "addons-783853"
	I0722 00:28:17.148036  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.488265501s)
	I0722 00:28:17.148201  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.412146777s)
	I0722 00:28:17.148218  533196 addons.go:475] Verifying addon metrics-server=true in "addons-783853"
	I0722 00:28:17.148326  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.200334299s)
	W0722 00:28:17.148345  533196 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0722 00:28:17.148362  533196 retry.go:31] will retry after 303.438114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0722 00:28:17.148517  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.934737239s)
	I0722 00:28:17.149784  533196 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-783853 service yakd-dashboard -n yakd-dashboard
	
	I0722 00:28:17.149892  533196 out.go:177] * Verifying registry addon...
	I0722 00:28:17.149914  533196 out.go:177] * Verifying ingress addon...
	I0722 00:28:17.153061  533196 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0722 00:28:17.153965  533196 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0722 00:28:17.192619  533196 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0722 00:28:17.192710  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:17.199033  533196 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0722 00:28:17.199103  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0722 00:28:17.207025  533196 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0722 00:28:17.378476  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:17.451966  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0722 00:28:17.687896  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:17.707786  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:17.816903  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.333986263s)
	I0722 00:28:17.816943  533196 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-783853"
	I0722 00:28:17.818983  533196 out.go:177] * Verifying csi-hostpath-driver addon...
	I0722 00:28:17.821583  533196 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0722 00:28:17.842303  533196 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0722 00:28:17.842330  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:18.173682  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:18.174901  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:18.354261  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:18.659281  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:18.659912  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:18.743315  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.291302097s)
	I0722 00:28:18.751449  533196 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0722 00:28:18.751537  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:18.775136  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:18.825974  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:18.899273  533196 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0722 00:28:18.934227  533196 addons.go:234] Setting addon gcp-auth=true in "addons-783853"
	I0722 00:28:18.934334  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:18.934855  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:18.957613  533196 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0722 00:28:18.957665  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:18.980675  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:19.086859  533196 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0722 00:28:19.088833  533196 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0722 00:28:19.090589  533196 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0722 00:28:19.090611  533196 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0722 00:28:19.125707  533196 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0722 00:28:19.125729  533196 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0722 00:28:19.145688  533196 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0722 00:28:19.145763  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0722 00:28:19.160216  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:19.161295  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:19.173883  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0722 00:28:19.327341  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:19.659090  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:19.680058  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:19.853822  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:19.881026  533196 addons.go:475] Verifying addon gcp-auth=true in "addons-783853"
	I0722 00:28:19.883157  533196 out.go:177] * Verifying gcp-auth addon...
	I0722 00:28:19.885759  533196 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0722 00:28:19.893260  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:19.903588  533196 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0722 00:28:19.903661  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:20.167384  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:20.168881  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:20.326471  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:20.389797  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:20.658233  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:20.659180  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:20.826170  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:20.889712  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:21.158773  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:21.158876  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:21.326308  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:21.388824  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:21.659215  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:21.660448  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:21.825641  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:21.889312  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:22.157359  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:22.158410  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:22.326534  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:22.365686  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:22.390200  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:22.663252  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:22.664490  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:22.826774  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:22.890749  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:23.157896  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:23.158749  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:23.325895  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:23.390144  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:23.658045  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:23.660190  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:23.826380  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:23.889643  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:24.157503  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:24.159236  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:24.326205  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:24.389290  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:24.657710  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:24.658385  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:24.826608  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:24.865828  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:24.889511  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:25.158505  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:25.159155  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:25.326048  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:25.390585  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:25.657380  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:25.658164  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:25.826417  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:25.889743  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:26.158505  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:26.158541  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:26.325582  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:26.389824  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:26.657679  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:26.658806  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:26.826073  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:26.889399  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:27.158925  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:27.159148  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:27.326116  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:27.365407  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:27.389318  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:27.657839  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:27.658432  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:27.826796  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:27.888974  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:28.158250  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:28.159087  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:28.326008  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:28.389246  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:28.657731  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:28.660377  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:28.825453  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:28.889819  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:29.158290  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:29.158601  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:29.326145  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:29.390039  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:29.658186  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:29.659100  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:29.826136  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:29.865603  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:29.889714  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:30.157960  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:30.159213  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:30.325974  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:30.389310  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:30.657030  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:30.658840  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:30.826125  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:30.889086  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:31.158056  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:31.158589  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:31.326421  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:31.389108  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:31.658291  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:31.659164  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:31.825615  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:31.889792  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:32.157960  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:32.158566  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:32.334360  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:32.365778  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:32.389722  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:32.658119  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:32.658870  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:32.825788  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:32.889572  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:33.157115  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:33.158437  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:33.325842  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:33.389632  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:33.658406  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:33.659064  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:33.826108  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:33.889058  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:34.158243  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:34.158622  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:34.326077  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:34.366821  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:34.389279  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:34.658319  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:34.658686  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:34.826884  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:34.889383  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:35.157005  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:35.158481  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:35.326212  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:35.389106  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:35.661267  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:35.662734  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:35.826079  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:35.889207  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:36.158605  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:36.159095  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:36.326410  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:36.389659  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:36.657240  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:36.658951  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:36.826250  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:36.865609  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:36.890251  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:37.157253  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:37.158445  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:37.325955  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:37.389864  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:37.656787  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:37.658501  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:37.825881  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:37.889595  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:38.157852  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:38.158360  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:38.325485  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:38.390405  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:38.659030  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:38.659234  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:38.826108  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:38.889164  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:39.159900  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:39.160096  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:39.326308  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:39.365831  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:39.389632  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:39.657714  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:39.658532  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:39.826412  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:39.889230  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:40.157368  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:40.159435  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:40.326032  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:40.390986  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:40.658118  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:40.658817  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:40.825756  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:40.889648  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:41.157103  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:41.158506  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:41.325595  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:41.397164  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:41.658563  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:41.660803  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:41.826471  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:41.867137  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:41.889331  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:42.158593  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:42.159486  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:42.326450  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:42.389857  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:42.657996  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:42.658276  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:42.826724  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:42.889416  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:43.159250  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:43.159542  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:43.326368  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:43.389516  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:43.658652  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:43.660157  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:43.826044  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:43.889333  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:44.156782  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:44.158681  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:44.325882  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:44.365809  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:44.389137  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:44.658168  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:44.659024  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:44.826296  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:44.889266  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:45.157854  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:45.159159  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:45.327494  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:45.389571  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:45.657298  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:45.659062  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:45.825381  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:45.889857  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:46.158381  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:46.159065  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:46.326317  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:46.389556  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:46.657160  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:46.658697  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:46.827028  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:46.865472  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:46.889955  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:47.157309  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:47.158984  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:47.326431  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:47.389061  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:47.657899  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:47.658809  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:47.825555  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:47.889599  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:48.157321  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:48.159334  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:48.325764  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:48.389552  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:48.657824  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:48.660644  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:48.825761  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:48.889833  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:49.158422  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:49.158957  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:49.326524  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:49.365778  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:49.388999  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:49.658048  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:49.658795  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:49.826130  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:49.889643  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:50.158377  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:50.158766  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:50.325608  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:50.389252  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:50.658580  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:50.659114  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:50.826568  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:50.889105  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:51.158432  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:51.159061  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:51.325709  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:51.365958  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:51.389811  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:51.658990  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:51.659468  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:51.826195  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:51.890040  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:52.157706  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:52.158234  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:52.326760  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:52.390172  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:52.658729  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:52.659202  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:52.826622  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:52.890107  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:53.157821  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:53.159470  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:53.325480  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:53.389514  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:53.658885  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:53.660235  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:53.826018  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:53.865637  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:53.889561  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:54.158736  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:54.159217  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:54.326121  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:54.409929  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:54.657997  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:54.659619  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:54.825997  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:54.889828  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:55.157584  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:55.158116  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:55.326381  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:55.391467  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:55.658294  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:55.658880  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:55.825545  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:55.865730  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:55.888895  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:56.194025  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:56.198349  533196 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0722 00:28:56.198422  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:56.329745  533196 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0722 00:28:56.329823  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:56.374938  533196 node_ready.go:49] node "addons-783853" has status "Ready":"True"
	I0722 00:28:56.375004  533196 node_ready.go:38] duration metric: took 43.512936622s for node "addons-783853" to be "Ready" ...
	I0722 00:28:56.375028  533196 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:28:56.391416  533196 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7mkbx" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:56.407114  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:56.668418  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:56.670416  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:56.831514  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:56.899665  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:57.159334  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:57.160245  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:57.327071  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:57.390186  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:57.657415  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:57.659445  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:57.827069  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:57.889063  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:58.159109  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:58.159974  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:58.327789  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:58.390088  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:58.397262  533196 pod_ready.go:102] pod "coredns-7db6d8ff4d-7mkbx" in "kube-system" namespace has status "Ready":"False"
	I0722 00:28:58.690246  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:58.718400  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:58.845163  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:58.916618  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:58.923204  533196 pod_ready.go:92] pod "coredns-7db6d8ff4d-7mkbx" in "kube-system" namespace has status "Ready":"True"
	I0722 00:28:58.923229  533196 pod_ready.go:81] duration metric: took 2.53173046s for pod "coredns-7db6d8ff4d-7mkbx" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:58.923255  533196 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-783853" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:58.938134  533196 pod_ready.go:92] pod "etcd-addons-783853" in "kube-system" namespace has status "Ready":"True"
	I0722 00:28:58.938162  533196 pod_ready.go:81] duration metric: took 14.898923ms for pod "etcd-addons-783853" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:58.938177  533196 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-783853" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:58.957006  533196 pod_ready.go:92] pod "kube-apiserver-addons-783853" in "kube-system" namespace has status "Ready":"True"
	I0722 00:28:58.957031  533196 pod_ready.go:81] duration metric: took 18.846838ms for pod "kube-apiserver-addons-783853" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:58.957043  533196 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-783853" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:58.973957  533196 pod_ready.go:92] pod "kube-controller-manager-addons-783853" in "kube-system" namespace has status "Ready":"True"
	I0722 00:28:58.973986  533196 pod_ready.go:81] duration metric: took 16.933615ms for pod "kube-controller-manager-addons-783853" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:58.974000  533196 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v7srs" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:58.990127  533196 pod_ready.go:92] pod "kube-proxy-v7srs" in "kube-system" namespace has status "Ready":"True"
	I0722 00:28:58.990152  533196 pod_ready.go:81] duration metric: took 16.135895ms for pod "kube-proxy-v7srs" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:58.990164  533196 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-783853" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:59.158130  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:59.166060  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:59.295137  533196 pod_ready.go:92] pod "kube-scheduler-addons-783853" in "kube-system" namespace has status "Ready":"True"
	I0722 00:28:59.295162  533196 pod_ready.go:81] duration metric: took 304.989623ms for pod "kube-scheduler-addons-783853" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:59.295174  533196 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:59.327714  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:59.391399  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:59.660654  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:59.661993  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:59.830057  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:59.890247  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:00.165211  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:00.172807  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:00.330067  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:00.391089  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:00.661848  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:00.663586  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:00.829393  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:00.890611  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:01.158981  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:01.159513  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:01.302485  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:01.327001  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:01.390214  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:01.658480  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:01.659841  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:01.827042  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:01.889454  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:02.159103  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:02.159512  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:02.329427  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:02.391313  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:02.661216  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:02.664148  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:02.828601  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:02.889987  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:03.163054  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:03.164546  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:03.303686  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:03.327944  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:03.390440  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:03.659636  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:03.660554  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:03.827574  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:03.889698  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:04.158687  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:04.159536  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:04.327288  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:04.389480  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:04.660906  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:04.662102  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:04.827314  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:04.889333  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:05.159770  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:05.180905  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:05.327118  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:05.390608  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:05.660303  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:05.660939  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:05.801702  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:05.827210  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:05.893977  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:06.174501  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:06.178812  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:06.338461  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:06.389967  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:06.660449  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:06.661168  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:06.827706  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:06.889111  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:07.158784  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:07.159352  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:07.327609  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:07.389775  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:07.659474  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:07.660487  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:07.803119  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:07.833814  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:07.889470  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:08.163631  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:08.173205  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:08.330739  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:08.390567  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:08.659521  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:08.660282  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:08.827258  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:08.889331  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:09.164623  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:09.165526  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:09.327283  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:09.394906  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:09.658113  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:09.659780  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:09.828885  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:09.889507  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:10.158273  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:10.160286  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:10.301547  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:10.326830  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:10.389263  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:10.662288  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:10.665009  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:10.829296  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:10.890208  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:11.159462  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:11.162164  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:11.328321  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:11.391691  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:11.659850  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:11.660763  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:11.827240  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:11.889531  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:12.158672  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:12.159900  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:12.301998  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:12.326961  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:12.389399  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:12.660509  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:12.661416  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:12.827205  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:12.889924  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:13.173155  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:13.193335  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:13.327794  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:13.390529  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:13.660922  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:13.663973  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:13.828973  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:13.890772  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:14.162545  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:14.164215  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:14.302502  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:14.328440  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:14.390939  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:14.667483  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:14.669071  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:14.833143  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:14.890164  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:15.161909  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:15.163440  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:15.329420  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:15.391547  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:15.660143  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:15.661685  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:15.838004  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:15.891445  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:16.159529  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:16.160957  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:16.305437  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:16.331867  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:16.389750  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:16.660365  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:16.661325  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:16.827486  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:16.889839  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:17.157944  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:17.160742  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:17.327144  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:17.389636  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:17.661365  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:17.669896  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:17.829141  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:17.890481  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:18.163111  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:18.164786  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:18.330162  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:18.392219  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:18.658420  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:18.660910  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:18.807461  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:18.833920  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:18.891244  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:19.163042  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:19.164030  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:19.329604  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:19.390097  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:19.658162  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:19.661276  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:19.829139  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:19.891756  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:20.178377  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:20.180415  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:20.329673  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:20.391270  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:20.668918  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:20.678391  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:20.828151  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:20.889929  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:21.164633  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:21.166050  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:21.302300  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:21.341311  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:21.389871  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:21.660591  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:21.662128  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:21.831065  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:21.893095  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:22.164298  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:22.165726  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:22.327068  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:22.389237  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:22.657466  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:22.658104  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:22.828944  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:22.889277  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:23.157438  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:23.157693  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:23.303518  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:23.327236  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:23.389480  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:23.658790  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:23.659411  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:23.827652  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:23.889955  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:24.159284  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:24.160378  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:24.347033  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:24.394010  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:24.667404  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:24.670265  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:24.827918  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:24.892717  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:25.166505  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:25.168906  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:25.339167  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:25.389867  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:25.674039  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:25.675858  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:25.802577  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:25.834281  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:25.890268  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:26.175438  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:26.177119  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:26.327445  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:26.396286  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:26.660805  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:26.664302  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:26.833000  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:26.891342  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:27.159275  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:27.161068  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:27.330738  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:27.394360  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:27.658743  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:27.659827  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:27.811499  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:27.829979  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:27.890592  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:28.166111  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:28.167462  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:28.329829  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:28.389859  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:28.669432  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:28.669871  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:28.826763  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:28.889762  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:29.158599  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:29.159761  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:29.327385  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:29.392550  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:29.661779  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:29.664406  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:29.830896  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:29.889561  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:30.159065  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:30.159678  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:30.301221  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:30.326953  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:30.389466  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:30.658761  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:30.659553  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:30.827228  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:30.889554  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:31.158819  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:31.158945  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:31.327879  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:31.389527  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:31.658358  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:31.659850  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:31.828073  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:31.889501  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:32.158008  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:32.159000  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:32.302219  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:32.329050  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:32.390041  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:32.683421  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:32.692725  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:32.830390  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:32.891340  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:33.165193  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:33.167126  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:33.328104  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:33.397302  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:33.662154  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:33.665068  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:33.828203  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:33.890349  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:34.160003  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:34.168209  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:34.304158  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:34.330055  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:34.389903  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:34.660072  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:34.662082  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:34.829685  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:34.892148  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:35.161886  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:35.163033  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:35.328828  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:35.390255  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:35.660141  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:35.664747  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:35.828326  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:35.889628  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:36.159578  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:36.162285  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:36.334489  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:36.390490  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:36.659837  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:36.661594  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:36.803183  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:36.828797  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:36.897896  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:37.158014  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:37.162784  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:37.339416  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:37.392139  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:37.658373  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:37.659416  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:37.829633  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:37.890055  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:38.158951  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:38.161248  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:38.327514  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:38.394894  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:38.659340  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:38.660579  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:38.827093  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:38.889133  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:39.169218  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:39.182700  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:39.302316  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:39.328189  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:39.389659  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:39.668536  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:39.670955  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:39.831502  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:39.892571  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:40.158109  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:40.159345  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:40.327194  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:40.389357  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:40.658650  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:40.659449  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:40.826904  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:40.889503  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:41.159218  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:41.160408  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:41.328140  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:41.389738  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:41.660594  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:41.661904  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:41.803028  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:41.827831  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:41.890940  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:42.161338  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:42.165688  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:42.328248  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:42.394345  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:42.664576  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:42.665998  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:42.832369  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:42.892492  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:43.159415  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:43.160752  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:43.328296  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:43.389978  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:43.661569  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:43.664065  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:43.833982  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:43.889492  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:44.161021  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:44.162279  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:44.302969  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:44.327934  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:44.390329  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:44.657778  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:44.660791  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:44.828114  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:44.890347  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:45.163851  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:45.166242  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:45.327810  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:45.390407  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:45.660965  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:45.661762  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:45.827818  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:45.889272  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:46.161543  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:46.161810  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:46.327633  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:46.390066  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:46.659087  533196 kapi.go:107] duration metric: took 1m29.506025896s to wait for kubernetes.io/minikube-addons=registry ...
	I0722 00:29:46.662320  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:46.801937  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:46.843945  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:46.889538  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:47.159663  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:47.327679  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:47.390236  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:47.659249  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:47.827970  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:47.889430  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:48.159168  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:48.327903  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:48.389603  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:48.659714  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:48.802836  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:48.828344  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:48.891214  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:49.159747  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:49.327859  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:49.389074  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:49.658956  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:49.830577  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:49.889810  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:50.159151  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:50.327138  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:50.389805  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:50.659045  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:50.806814  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:50.827683  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:50.890076  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:51.159039  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:51.330167  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:51.389985  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:51.659310  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:51.827164  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:51.889202  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:52.160051  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:52.329042  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:52.389411  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:52.658425  533196 kapi.go:107] duration metric: took 1m35.504456713s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0722 00:29:52.827458  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:52.889761  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:53.310911  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:53.327670  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:53.390234  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:53.829447  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:53.889922  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:54.326797  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:54.389060  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:54.828538  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:54.889457  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:55.327478  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:55.400217  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:55.802075  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:55.829543  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:55.891882  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:56.328710  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:56.389050  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:56.848067  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:56.890427  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:57.371834  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:57.391055  533196 kapi.go:107] duration metric: took 1m37.505291183s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0722 00:29:57.393062  533196 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-783853 cluster.
	I0722 00:29:57.394680  533196 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0722 00:29:57.396813  533196 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0722 00:29:57.827915  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:58.303991  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:58.329070  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:58.834746  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:59.327515  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:59.826883  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:00.307740  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:00.333622  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:00.831931  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:01.327523  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:01.829328  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:02.327979  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:02.802669  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:02.828783  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:03.328033  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:03.827261  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:04.327127  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:04.827408  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:05.301405  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:05.328051  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:05.828483  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:06.327479  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:06.827579  533196 kapi.go:107] duration metric: took 1m49.005991601s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0722 00:30:06.829636  533196 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0722 00:30:06.831151  533196 addons.go:510] duration metric: took 1m56.886153118s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0722 00:30:07.302665  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:09.802165  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:11.802403  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:14.302821  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:16.803394  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:19.301701  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:21.302056  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:23.302436  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:25.801224  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:28.301001  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:30.301543  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:32.301857  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:34.300942  533196 pod_ready.go:92] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"True"
	I0722 00:30:34.300967  533196 pod_ready.go:81] duration metric: took 1m35.005785877s for pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace to be "Ready" ...
	I0722 00:30:34.300979  533196 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-jwvh7" in "kube-system" namespace to be "Ready" ...
	I0722 00:30:34.306345  533196 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-jwvh7" in "kube-system" namespace has status "Ready":"True"
	I0722 00:30:34.306371  533196 pod_ready.go:81] duration metric: took 5.38443ms for pod "nvidia-device-plugin-daemonset-jwvh7" in "kube-system" namespace to be "Ready" ...
	I0722 00:30:34.306392  533196 pod_ready.go:38] duration metric: took 1m37.931338924s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:30:34.306410  533196 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:30:34.306454  533196 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:30:34.306521  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:30:34.358573  533196 cri.go:89] found id: "f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c"
	I0722 00:30:34.358595  533196 cri.go:89] found id: ""
	I0722 00:30:34.358607  533196 logs.go:276] 1 containers: [f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c]
	I0722 00:30:34.358663  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:34.363032  533196 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:30:34.363106  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:30:34.404043  533196 cri.go:89] found id: "c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe"
	I0722 00:30:34.404064  533196 cri.go:89] found id: ""
	I0722 00:30:34.404072  533196 logs.go:276] 1 containers: [c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe]
	I0722 00:30:34.404144  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:34.407487  533196 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:30:34.407587  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:30:34.451962  533196 cri.go:89] found id: "c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3"
	I0722 00:30:34.452035  533196 cri.go:89] found id: ""
	I0722 00:30:34.452058  533196 logs.go:276] 1 containers: [c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3]
	I0722 00:30:34.452146  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:34.455497  533196 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:30:34.455578  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:30:34.502021  533196 cri.go:89] found id: "c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28"
	I0722 00:30:34.502042  533196 cri.go:89] found id: ""
	I0722 00:30:34.502050  533196 logs.go:276] 1 containers: [c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28]
	I0722 00:30:34.502112  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:34.505434  533196 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:30:34.505506  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:30:34.545862  533196 cri.go:89] found id: "7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d"
	I0722 00:30:34.545886  533196 cri.go:89] found id: ""
	I0722 00:30:34.545894  533196 logs.go:276] 1 containers: [7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d]
	I0722 00:30:34.545966  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:34.549469  533196 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:30:34.549552  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:30:34.597537  533196 cri.go:89] found id: "a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab"
	I0722 00:30:34.597568  533196 cri.go:89] found id: ""
	I0722 00:30:34.597577  533196 logs.go:276] 1 containers: [a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab]
	I0722 00:30:34.597636  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:34.602154  533196 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:30:34.602225  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:30:34.642444  533196 cri.go:89] found id: "f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b"
	I0722 00:30:34.642467  533196 cri.go:89] found id: ""
	I0722 00:30:34.642480  533196 logs.go:276] 1 containers: [f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b]
	I0722 00:30:34.642555  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:34.646188  533196 logs.go:123] Gathering logs for kube-apiserver [f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c] ...
	I0722 00:30:34.646211  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c"
	I0722 00:30:34.710059  533196 logs.go:123] Gathering logs for etcd [c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe] ...
	I0722 00:30:34.710103  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe"
	I0722 00:30:34.761805  533196 logs.go:123] Gathering logs for coredns [c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3] ...
	I0722 00:30:34.761836  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3"
	I0722 00:30:34.803019  533196 logs.go:123] Gathering logs for kube-scheduler [c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28] ...
	I0722 00:30:34.803048  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28"
	I0722 00:30:34.857514  533196 logs.go:123] Gathering logs for kube-controller-manager [a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab] ...
	I0722 00:30:34.857545  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab"
	I0722 00:30:34.923135  533196 logs.go:123] Gathering logs for kindnet [f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b] ...
	I0722 00:30:34.923167  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b"
	I0722 00:30:34.991801  533196 logs.go:123] Gathering logs for container status ...
	I0722 00:30:34.991832  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:30:35.054596  533196 logs.go:123] Gathering logs for dmesg ...
	I0722 00:30:35.054627  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:30:35.074620  533196 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:30:35.074647  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:30:35.242715  533196 logs.go:123] Gathering logs for kube-proxy [7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d] ...
	I0722 00:30:35.242747  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d"
	I0722 00:30:35.287096  533196 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:30:35.287124  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:30:35.377856  533196 logs.go:123] Gathering logs for kubelet ...
	I0722 00:30:35.377890  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 00:30:35.424993  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.164143    1534 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:35.425232  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.164181    1534 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:35.426332  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.181056    1534 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-783853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:35.426523  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.181092    1534 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-783853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:35.426711  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.181136    1534 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-783853' and this object
	W0722 00:30:35.426919  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.181148    1534 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-783853' and this object
	I0722 00:30:35.459791  533196 out.go:304] Setting ErrFile to fd 2...
	I0722 00:30:35.459821  533196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 00:30:35.459873  533196 out.go:239] X Problems detected in kubelet:
	W0722 00:30:35.459881  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.164181    1534 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:35.459889  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.181056    1534 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-783853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:35.459901  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.181092    1534 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-783853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:35.459907  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.181136    1534 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-783853' and this object
	W0722 00:30:35.459916  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.181148    1534 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-783853' and this object
	I0722 00:30:35.459922  533196 out.go:304] Setting ErrFile to fd 2...
	I0722 00:30:35.459927  533196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:30:45.461013  533196 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:30:45.474972  533196 api_server.go:72] duration metric: took 2m35.530339378s to wait for apiserver process to appear ...
	I0722 00:30:45.475005  533196 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:30:45.475042  533196 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:30:45.475106  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:30:45.513719  533196 cri.go:89] found id: "f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c"
	I0722 00:30:45.513742  533196 cri.go:89] found id: ""
	I0722 00:30:45.513750  533196 logs.go:276] 1 containers: [f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c]
	I0722 00:30:45.513808  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:45.517159  533196 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:30:45.517228  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:30:45.555750  533196 cri.go:89] found id: "c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe"
	I0722 00:30:45.555769  533196 cri.go:89] found id: ""
	I0722 00:30:45.555777  533196 logs.go:276] 1 containers: [c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe]
	I0722 00:30:45.555837  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:45.559364  533196 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:30:45.559433  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:30:45.625418  533196 cri.go:89] found id: "c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3"
	I0722 00:30:45.625438  533196 cri.go:89] found id: ""
	I0722 00:30:45.625446  533196 logs.go:276] 1 containers: [c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3]
	I0722 00:30:45.625499  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:45.628937  533196 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:30:45.629056  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:30:45.670843  533196 cri.go:89] found id: "c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28"
	I0722 00:30:45.670865  533196 cri.go:89] found id: ""
	I0722 00:30:45.670874  533196 logs.go:276] 1 containers: [c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28]
	I0722 00:30:45.670927  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:45.674516  533196 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:30:45.674594  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:30:45.714156  533196 cri.go:89] found id: "7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d"
	I0722 00:30:45.714177  533196 cri.go:89] found id: ""
	I0722 00:30:45.714185  533196 logs.go:276] 1 containers: [7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d]
	I0722 00:30:45.714239  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:45.717704  533196 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:30:45.717777  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:30:45.758274  533196 cri.go:89] found id: "a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab"
	I0722 00:30:45.758345  533196 cri.go:89] found id: ""
	I0722 00:30:45.758361  533196 logs.go:276] 1 containers: [a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab]
	I0722 00:30:45.758426  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:45.761739  533196 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:30:45.761808  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:30:45.806368  533196 cri.go:89] found id: "f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b"
	I0722 00:30:45.806390  533196 cri.go:89] found id: ""
	I0722 00:30:45.806399  533196 logs.go:276] 1 containers: [f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b]
	I0722 00:30:45.806457  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:45.809877  533196 logs.go:123] Gathering logs for kube-controller-manager [a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab] ...
	I0722 00:30:45.809897  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab"
	I0722 00:30:45.881045  533196 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:30:45.881089  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:30:45.983965  533196 logs.go:123] Gathering logs for container status ...
	I0722 00:30:45.983999  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:30:46.063341  533196 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:30:46.063373  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:30:46.198117  533196 logs.go:123] Gathering logs for kube-apiserver [f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c] ...
	I0722 00:30:46.198173  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c"
	I0722 00:30:46.254454  533196 logs.go:123] Gathering logs for coredns [c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3] ...
	I0722 00:30:46.254489  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3"
	I0722 00:30:46.308015  533196 logs.go:123] Gathering logs for kube-scheduler [c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28] ...
	I0722 00:30:46.308042  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28"
	I0722 00:30:46.352117  533196 logs.go:123] Gathering logs for kube-proxy [7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d] ...
	I0722 00:30:46.352148  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d"
	I0722 00:30:46.392160  533196 logs.go:123] Gathering logs for kindnet [f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b] ...
	I0722 00:30:46.392191  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b"
	I0722 00:30:46.439139  533196 logs.go:123] Gathering logs for kubelet ...
	I0722 00:30:46.439173  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 00:30:46.473391  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.164143    1534 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:46.473636  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.164181    1534 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:46.474980  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.181056    1534 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-783853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:46.475176  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.181092    1534 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-783853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:46.475364  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.181136    1534 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-783853' and this object
	W0722 00:30:46.475583  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.181148    1534 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-783853' and this object
	I0722 00:30:46.518202  533196 logs.go:123] Gathering logs for dmesg ...
	I0722 00:30:46.518235  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:30:46.537984  533196 logs.go:123] Gathering logs for etcd [c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe] ...
	I0722 00:30:46.538016  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe"
	I0722 00:30:46.615715  533196 out.go:304] Setting ErrFile to fd 2...
	I0722 00:30:46.615744  533196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 00:30:46.615809  533196 out.go:239] X Problems detected in kubelet:
	W0722 00:30:46.615822  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.164181    1534 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:46.615838  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.181056    1534 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-783853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:46.615846  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.181092    1534 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-783853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:46.615857  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.181136    1534 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-783853' and this object
	W0722 00:30:46.615864  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.181148    1534 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-783853' and this object
	I0722 00:30:46.615870  533196 out.go:304] Setting ErrFile to fd 2...
	I0722 00:30:46.615880  533196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:30:56.618118  533196 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0722 00:30:56.626328  533196 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0722 00:30:56.628076  533196 api_server.go:141] control plane version: v1.30.3
	I0722 00:30:56.628103  533196 api_server.go:131] duration metric: took 11.153089459s to wait for apiserver health ...
	I0722 00:30:56.628111  533196 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:30:56.628134  533196 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:30:56.628200  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:30:56.667151  533196 cri.go:89] found id: "f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c"
	I0722 00:30:56.667170  533196 cri.go:89] found id: ""
	I0722 00:30:56.667179  533196 logs.go:276] 1 containers: [f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c]
	I0722 00:30:56.667241  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:56.671089  533196 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:30:56.671165  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:30:56.710114  533196 cri.go:89] found id: "c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe"
	I0722 00:30:56.710137  533196 cri.go:89] found id: ""
	I0722 00:30:56.710145  533196 logs.go:276] 1 containers: [c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe]
	I0722 00:30:56.710201  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:56.713645  533196 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:30:56.713719  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:30:56.756273  533196 cri.go:89] found id: "c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3"
	I0722 00:30:56.756297  533196 cri.go:89] found id: ""
	I0722 00:30:56.756305  533196 logs.go:276] 1 containers: [c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3]
	I0722 00:30:56.756383  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:56.760079  533196 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:30:56.760174  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:30:56.798045  533196 cri.go:89] found id: "c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28"
	I0722 00:30:56.798068  533196 cri.go:89] found id: ""
	I0722 00:30:56.798076  533196 logs.go:276] 1 containers: [c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28]
	I0722 00:30:56.798146  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:56.804204  533196 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:30:56.804278  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:30:56.849168  533196 cri.go:89] found id: "7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d"
	I0722 00:30:56.849192  533196 cri.go:89] found id: ""
	I0722 00:30:56.849200  533196 logs.go:276] 1 containers: [7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d]
	I0722 00:30:56.849256  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:56.852970  533196 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:30:56.853040  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:30:56.891066  533196 cri.go:89] found id: "a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab"
	I0722 00:30:56.891127  533196 cri.go:89] found id: ""
	I0722 00:30:56.891149  533196 logs.go:276] 1 containers: [a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab]
	I0722 00:30:56.891227  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:56.894626  533196 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:30:56.894697  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:30:56.933676  533196 cri.go:89] found id: "f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b"
	I0722 00:30:56.933698  533196 cri.go:89] found id: ""
	I0722 00:30:56.933706  533196 logs.go:276] 1 containers: [f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b]
	I0722 00:30:56.933759  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:56.936976  533196 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:30:56.937001  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:30:57.096441  533196 logs.go:123] Gathering logs for kube-apiserver [f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c] ...
	I0722 00:30:57.096472  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c"
	I0722 00:30:57.199371  533196 logs.go:123] Gathering logs for kube-scheduler [c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28] ...
	I0722 00:30:57.199407  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28"
	I0722 00:30:57.247914  533196 logs.go:123] Gathering logs for kube-proxy [7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d] ...
	I0722 00:30:57.247947  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d"
	I0722 00:30:57.290832  533196 logs.go:123] Gathering logs for kube-controller-manager [a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab] ...
	I0722 00:30:57.290859  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab"
	I0722 00:30:57.359616  533196 logs.go:123] Gathering logs for kindnet [f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b] ...
	I0722 00:30:57.359652  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b"
	I0722 00:30:57.415081  533196 logs.go:123] Gathering logs for kubelet ...
	I0722 00:30:57.415111  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 00:30:57.455262  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.164143    1534 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:57.455499  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.164181    1534 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:57.456582  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.181056    1534 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-783853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:57.456775  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.181092    1534 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-783853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:57.456963  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.181136    1534 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-783853' and this object
	W0722 00:30:57.457171  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.181148    1534 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-783853' and this object
	I0722 00:30:57.500022  533196 logs.go:123] Gathering logs for dmesg ...
	I0722 00:30:57.500048  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:30:57.519184  533196 logs.go:123] Gathering logs for etcd [c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe] ...
	I0722 00:30:57.519213  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe"
	I0722 00:30:57.567002  533196 logs.go:123] Gathering logs for coredns [c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3] ...
	I0722 00:30:57.567035  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3"
	I0722 00:30:57.607684  533196 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:30:57.607713  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:30:57.710828  533196 logs.go:123] Gathering logs for container status ...
	I0722 00:30:57.710871  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:30:57.766065  533196 out.go:304] Setting ErrFile to fd 2...
	I0722 00:30:57.766100  533196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 00:30:57.766162  533196 out.go:239] X Problems detected in kubelet:
	W0722 00:30:57.766174  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.164181    1534 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:57.766191  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.181056    1534 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-783853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:57.766202  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.181092    1534 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-783853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:57.766217  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.181136    1534 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-783853' and this object
	W0722 00:30:57.766225  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.181148    1534 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-783853' and this object
	I0722 00:30:57.766234  533196 out.go:304] Setting ErrFile to fd 2...
	I0722 00:30:57.766239  533196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:31:07.778676  533196 system_pods.go:59] 18 kube-system pods found
	I0722 00:31:07.778717  533196 system_pods.go:61] "coredns-7db6d8ff4d-7mkbx" [23f5be8a-5c87-4784-b863-324b9a79fccf] Running
	I0722 00:31:07.778724  533196 system_pods.go:61] "csi-hostpath-attacher-0" [87434606-a156-4af9-89c7-87f1b925aa18] Running
	I0722 00:31:07.778728  533196 system_pods.go:61] "csi-hostpath-resizer-0" [06dcad40-139c-4450-85ac-0c181a0c4ba8] Running
	I0722 00:31:07.778733  533196 system_pods.go:61] "csi-hostpathplugin-kn5st" [2bb4c17d-23bf-4aa7-a4c5-c61ccc25cd62] Running
	I0722 00:31:07.778772  533196 system_pods.go:61] "etcd-addons-783853" [db67dcf2-4601-498e-ab87-4d6b347e968a] Running
	I0722 00:31:07.778777  533196 system_pods.go:61] "kindnet-cdpvw" [5c2685c2-cf4b-4dc1-b2ce-407adb3e4b65] Running
	I0722 00:31:07.778784  533196 system_pods.go:61] "kube-apiserver-addons-783853" [8ec42fcb-ab7b-4b3b-a6c1-ee832cb2d96c] Running
	I0722 00:31:07.778788  533196 system_pods.go:61] "kube-controller-manager-addons-783853" [43234280-4e30-42a9-a39f-3ecf1ab25a34] Running
	I0722 00:31:07.778803  533196 system_pods.go:61] "kube-ingress-dns-minikube" [4f67e797-baba-4022-b2e9-f969cb82f4fb] Running
	I0722 00:31:07.778807  533196 system_pods.go:61] "kube-proxy-v7srs" [504b64d6-49a4-472b-9ede-45723f69fab1] Running
	I0722 00:31:07.778812  533196 system_pods.go:61] "kube-scheduler-addons-783853" [dc81eee1-f262-4ca8-8856-f56e30661a00] Running
	I0722 00:31:07.778818  533196 system_pods.go:61] "metrics-server-c59844bb4-znqdq" [3ecb4a8a-e4fc-46e1-b6cb-e0a2f7adc362] Running
	I0722 00:31:07.778831  533196 system_pods.go:61] "nvidia-device-plugin-daemonset-jwvh7" [03f22a4c-c638-40a2-8a03-0b0770a62063] Running
	I0722 00:31:07.778836  533196 system_pods.go:61] "registry-656c9c8d9c-m9wqh" [d562888d-bd3c-4b3f-9adc-aea340501248] Running
	I0722 00:31:07.778840  533196 system_pods.go:61] "registry-proxy-qs2hs" [c0bbfdb6-7c30-4635-b7e5-b3509185506d] Running
	I0722 00:31:07.778844  533196 system_pods.go:61] "snapshot-controller-745499f584-9cqss" [118575a6-1b12-4aa7-bc7d-83e150ed8d0a] Running
	I0722 00:31:07.778847  533196 system_pods.go:61] "snapshot-controller-745499f584-b6v2r" [cdd8b6ed-74e6-4df0-84eb-3c0a7fd51c86] Running
	I0722 00:31:07.778852  533196 system_pods.go:61] "storage-provisioner" [13a8c1f3-5cee-4d0a-bd3a-3611f982b615] Running
	I0722 00:31:07.778860  533196 system_pods.go:74] duration metric: took 11.150742375s to wait for pod list to return data ...
	I0722 00:31:07.778872  533196 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:31:07.781242  533196 default_sa.go:45] found service account: "default"
	I0722 00:31:07.781269  533196 default_sa.go:55] duration metric: took 2.389438ms for default service account to be created ...
	I0722 00:31:07.781279  533196 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:31:07.791027  533196 system_pods.go:86] 18 kube-system pods found
	I0722 00:31:07.791065  533196 system_pods.go:89] "coredns-7db6d8ff4d-7mkbx" [23f5be8a-5c87-4784-b863-324b9a79fccf] Running
	I0722 00:31:07.791073  533196 system_pods.go:89] "csi-hostpath-attacher-0" [87434606-a156-4af9-89c7-87f1b925aa18] Running
	I0722 00:31:07.791078  533196 system_pods.go:89] "csi-hostpath-resizer-0" [06dcad40-139c-4450-85ac-0c181a0c4ba8] Running
	I0722 00:31:07.791082  533196 system_pods.go:89] "csi-hostpathplugin-kn5st" [2bb4c17d-23bf-4aa7-a4c5-c61ccc25cd62] Running
	I0722 00:31:07.791087  533196 system_pods.go:89] "etcd-addons-783853" [db67dcf2-4601-498e-ab87-4d6b347e968a] Running
	I0722 00:31:07.791093  533196 system_pods.go:89] "kindnet-cdpvw" [5c2685c2-cf4b-4dc1-b2ce-407adb3e4b65] Running
	I0722 00:31:07.791097  533196 system_pods.go:89] "kube-apiserver-addons-783853" [8ec42fcb-ab7b-4b3b-a6c1-ee832cb2d96c] Running
	I0722 00:31:07.791102  533196 system_pods.go:89] "kube-controller-manager-addons-783853" [43234280-4e30-42a9-a39f-3ecf1ab25a34] Running
	I0722 00:31:07.791107  533196 system_pods.go:89] "kube-ingress-dns-minikube" [4f67e797-baba-4022-b2e9-f969cb82f4fb] Running
	I0722 00:31:07.791112  533196 system_pods.go:89] "kube-proxy-v7srs" [504b64d6-49a4-472b-9ede-45723f69fab1] Running
	I0722 00:31:07.791116  533196 system_pods.go:89] "kube-scheduler-addons-783853" [dc81eee1-f262-4ca8-8856-f56e30661a00] Running
	I0722 00:31:07.791123  533196 system_pods.go:89] "metrics-server-c59844bb4-znqdq" [3ecb4a8a-e4fc-46e1-b6cb-e0a2f7adc362] Running
	I0722 00:31:07.791128  533196 system_pods.go:89] "nvidia-device-plugin-daemonset-jwvh7" [03f22a4c-c638-40a2-8a03-0b0770a62063] Running
	I0722 00:31:07.791135  533196 system_pods.go:89] "registry-656c9c8d9c-m9wqh" [d562888d-bd3c-4b3f-9adc-aea340501248] Running
	I0722 00:31:07.791139  533196 system_pods.go:89] "registry-proxy-qs2hs" [c0bbfdb6-7c30-4635-b7e5-b3509185506d] Running
	I0722 00:31:07.791143  533196 system_pods.go:89] "snapshot-controller-745499f584-9cqss" [118575a6-1b12-4aa7-bc7d-83e150ed8d0a] Running
	I0722 00:31:07.791148  533196 system_pods.go:89] "snapshot-controller-745499f584-b6v2r" [cdd8b6ed-74e6-4df0-84eb-3c0a7fd51c86] Running
	I0722 00:31:07.791155  533196 system_pods.go:89] "storage-provisioner" [13a8c1f3-5cee-4d0a-bd3a-3611f982b615] Running
	I0722 00:31:07.791162  533196 system_pods.go:126] duration metric: took 9.876648ms to wait for k8s-apps to be running ...
	I0722 00:31:07.791172  533196 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:31:07.791232  533196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:31:07.803788  533196 system_svc.go:56] duration metric: took 12.605404ms WaitForService to wait for kubelet
	I0722 00:31:07.803816  533196 kubeadm.go:582] duration metric: took 2m57.859188435s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:31:07.803838  533196 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:31:07.807620  533196 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0722 00:31:07.807656  533196 node_conditions.go:123] node cpu capacity is 2
	I0722 00:31:07.807669  533196 node_conditions.go:105] duration metric: took 3.825805ms to run NodePressure ...
	I0722 00:31:07.807682  533196 start.go:241] waiting for startup goroutines ...
	I0722 00:31:07.807693  533196 start.go:246] waiting for cluster config update ...
	I0722 00:31:07.807712  533196 start.go:255] writing updated cluster config ...
	I0722 00:31:07.807994  533196 ssh_runner.go:195] Run: rm -f paused
	I0722 00:31:08.136617  533196 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 00:31:08.139489  533196 out.go:177] * Done! kubectl is now configured to use "addons-783853" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 22 00:34:58 addons-783853 crio[964]: time="2024-07-22 00:34:58.260611583Z" level=info msg="Removing pod sandbox: b9b44f0da3d8be40f8405ec3136d7398c1578a444f1e4d6d769a213988d21922" id=2dda06ca-9f6f-47c1-91ee-a23144d1a61a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 22 00:34:58 addons-783853 crio[964]: time="2024-07-22 00:34:58.268610570Z" level=info msg="Removed pod sandbox: b9b44f0da3d8be40f8405ec3136d7398c1578a444f1e4d6d769a213988d21922" id=2dda06ca-9f6f-47c1-91ee-a23144d1a61a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 22 00:34:58 addons-783853 crio[964]: time="2024-07-22 00:34:58.269148054Z" level=info msg="Stopping pod sandbox: 75ddb073ff21835752611b14c20595adddd1a54e6bae39cdc5d793f72eca0b95" id=4685b8a5-48f4-4c61-9254-91b40112a01a name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 22 00:34:58 addons-783853 crio[964]: time="2024-07-22 00:34:58.269247838Z" level=info msg="Stopped pod sandbox (already stopped): 75ddb073ff21835752611b14c20595adddd1a54e6bae39cdc5d793f72eca0b95" id=4685b8a5-48f4-4c61-9254-91b40112a01a name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 22 00:34:58 addons-783853 crio[964]: time="2024-07-22 00:34:58.269509798Z" level=info msg="Removing pod sandbox: 75ddb073ff21835752611b14c20595adddd1a54e6bae39cdc5d793f72eca0b95" id=8f83a19d-ef97-4f47-991e-df860a5bdc8c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 22 00:34:58 addons-783853 crio[964]: time="2024-07-22 00:34:58.277433666Z" level=info msg="Removed pod sandbox: 75ddb073ff21835752611b14c20595adddd1a54e6bae39cdc5d793f72eca0b95" id=8f83a19d-ef97-4f47-991e-df860a5bdc8c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 22 00:34:58 addons-783853 crio[964]: time="2024-07-22 00:34:58.277995347Z" level=info msg="Stopping pod sandbox: 73c4a285d4f7151cb551aebfbb68487f6486ce4631dbd9844024cf56e24dd370" id=5bda8d8a-4d19-4c84-9d02-d9cb79a3953a name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 22 00:34:58 addons-783853 crio[964]: time="2024-07-22 00:34:58.278056755Z" level=info msg="Stopped pod sandbox (already stopped): 73c4a285d4f7151cb551aebfbb68487f6486ce4631dbd9844024cf56e24dd370" id=5bda8d8a-4d19-4c84-9d02-d9cb79a3953a name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 22 00:34:58 addons-783853 crio[964]: time="2024-07-22 00:34:58.278644389Z" level=info msg="Removing pod sandbox: 73c4a285d4f7151cb551aebfbb68487f6486ce4631dbd9844024cf56e24dd370" id=e442ed49-b2ec-470b-af74-061cd25f6ffc name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 22 00:34:58 addons-783853 crio[964]: time="2024-07-22 00:34:58.286625407Z" level=info msg="Removed pod sandbox: 73c4a285d4f7151cb551aebfbb68487f6486ce4631dbd9844024cf56e24dd370" id=e442ed49-b2ec-470b-af74-061cd25f6ffc name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 22 00:34:58 addons-783853 crio[964]: time="2024-07-22 00:34:58.376447377Z" level=info msg="Stopping container: c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661 (timeout: 2s)" id=49ce4a25-0b03-4ef2-a26a-1b041878d9c5 name=/runtime.v1.RuntimeService/StopContainer
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.383270972Z" level=warning msg="Stopping container c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=49ce4a25-0b03-4ef2-a26a-1b041878d9c5 name=/runtime.v1.RuntimeService/StopContainer
	Jul 22 00:35:00 addons-783853 conmon[4677]: conmon c461369c1e9544a7a1ee <ninfo>: container 4688 exited with status 137
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.526734025Z" level=info msg="Stopped container c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661: ingress-nginx/ingress-nginx-controller-6d9bd977d4-g7h89/controller" id=49ce4a25-0b03-4ef2-a26a-1b041878d9c5 name=/runtime.v1.RuntimeService/StopContainer
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.527432586Z" level=info msg="Stopping pod sandbox: 5cfde4c0b882c55e481b5b56cc6e52596ed3f9726559b58ec45b944bb4344f48" id=5708815c-cdb4-4e83-9767-775bf18df93f name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.531129446Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-FU3MY5DUL3XMDUCS - [0:0]\n:KUBE-HP-L4MDPUJWJ77FKVLS - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-FU3MY5DUL3XMDUCS\n-X KUBE-HP-L4MDPUJWJ77FKVLS\nCOMMIT\n"
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.532693536Z" level=info msg="Closing host port tcp:80"
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.532792442Z" level=info msg="Closing host port tcp:443"
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.534355087Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.534388064Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.534596707Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-6d9bd977d4-g7h89 Namespace:ingress-nginx ID:5cfde4c0b882c55e481b5b56cc6e52596ed3f9726559b58ec45b944bb4344f48 UID:47b77f41-f681-441d-bf14-c37ac84b670d NetNS:/var/run/netns/ef2ae07f-657d-49fc-85bc-d487b97ea862 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.534876136Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-6d9bd977d4-g7h89 from CNI network \"kindnet\" (type=ptp)"
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.562635754Z" level=info msg="Stopped pod sandbox: 5cfde4c0b882c55e481b5b56cc6e52596ed3f9726559b58ec45b944bb4344f48" id=5708815c-cdb4-4e83-9767-775bf18df93f name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.615193837Z" level=info msg="Removing container: c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661" id=09b7a131-e742-4a1b-ada9-79c15cb0bc50 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.629450881Z" level=info msg="Removed container c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661: ingress-nginx/ingress-nginx-controller-6d9bd977d4-g7h89/controller" id=09b7a131-e742-4a1b-ada9-79c15cb0bc50 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b9526602d9ad3       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   8 seconds ago       Running             hello-world-app           0                   82db09d763349       hello-world-app-6778b5fc9f-pl4h9
	004ab4d184e30       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                         2 minutes ago       Running             nginx                     0                   bdf9029e99990       nginx
	dd563aa5d9bbe       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                   3 minutes ago       Running             headlamp                  0                   5ec0375dcb002       headlamp-7867546754-hbj4f
	bb90afc7e859b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69            5 minutes ago       Running             gcp-auth                  0                   cfa15e5448e8b       gcp-auth-5db96cd9b4-mh6ws
	edcd40174a088       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70   5 minutes ago       Running             metrics-server            0                   499ced9a5bc9b       metrics-server-c59844bb4-znqdq
	d6ca537fc472a       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                         5 minutes ago       Running             yakd                      0                   17d510e84d6d7       yakd-dashboard-799879c74f-7hmg4
	c568a897c0879       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                        6 minutes ago       Running             coredns                   0                   601cf99f8fad6       coredns-7db6d8ff4d-7mkbx
	461123cde6927       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        6 minutes ago       Running             storage-provisioner       0                   14df73512b59b       storage-provisioner
	f1d9ff424c7f6       docker.io/kindest/kindnetd@sha256:14100a3a7aca6cad3de3f26ee342ad937ca7d2844b1983d3baa7bf5f491baa7a                      6 minutes ago       Running             kindnet-cni               0                   330306786f6bf       kindnet-cdpvw
	7ce7a71ddc6cb       2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be                                                        6 minutes ago       Running             kube-proxy                0                   b413ec036bbf9       kube-proxy-v7srs
	c3ad375225f40       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                        7 minutes ago       Running             etcd                      0                   f9984e78264dd       etcd-addons-783853
	f0031f14ce88c       61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca                                                        7 minutes ago       Running             kube-apiserver            0                   425237d385aa6       kube-apiserver-addons-783853
	c4a0894f7c861       d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355                                                        7 minutes ago       Running             kube-scheduler            0                   61654809120f4       kube-scheduler-addons-783853
	a3d74c472e3cd       8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a                                                        7 minutes ago       Running             kube-controller-manager   0                   39daf1ed56afa       kube-controller-manager-addons-783853
	
	
	==> coredns [c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3] <==
	[INFO] 10.244.0.18:53704 - 64062 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003233617s
	[INFO] 10.244.0.18:56435 - 64781 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000939852s
	[INFO] 10.244.0.18:56435 - 41472 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0009042s
	[INFO] 10.244.0.18:56276 - 62664 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000117359s
	[INFO] 10.244.0.18:56276 - 3319 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000048435s
	[INFO] 10.244.0.18:35844 - 22610 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000056337s
	[INFO] 10.244.0.18:35844 - 47696 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000034552s
	[INFO] 10.244.0.18:59693 - 7615 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000055115s
	[INFO] 10.244.0.18:59693 - 41146 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000032961s
	[INFO] 10.244.0.18:45044 - 20384 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00164986s
	[INFO] 10.244.0.18:45044 - 50339 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001574872s
	[INFO] 10.244.0.18:51242 - 39676 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00007292s
	[INFO] 10.244.0.18:51242 - 22015 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000126131s
	[INFO] 10.244.0.20:36704 - 32684 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000147317s
	[INFO] 10.244.0.20:42870 - 38273 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000072058s
	[INFO] 10.244.0.20:41244 - 61005 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000084481s
	[INFO] 10.244.0.20:58722 - 8117 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000066881s
	[INFO] 10.244.0.20:46881 - 24427 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000077621s
	[INFO] 10.244.0.20:58943 - 51881 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00006094s
	[INFO] 10.244.0.20:42125 - 37719 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002727657s
	[INFO] 10.244.0.20:44897 - 7940 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002444257s
	[INFO] 10.244.0.20:36993 - 13633 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000844146s
	[INFO] 10.244.0.20:39896 - 15207 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000778832s
	[INFO] 10.244.0.22:45649 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000201463s
	[INFO] 10.244.0.22:41238 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00012773s
	
	
	==> describe nodes <==
	Name:               addons-783853
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-783853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=addons-783853
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T00_27_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-783853
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 00:27:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-783853
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 00:35:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 00:33:01 +0000   Mon, 22 Jul 2024 00:27:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 00:33:01 +0000   Mon, 22 Jul 2024 00:27:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 00:33:01 +0000   Mon, 22 Jul 2024 00:27:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 00:33:01 +0000   Mon, 22 Jul 2024 00:28:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-783853
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f136131046143669c7ae750f1c3a238
	  System UUID:                a87d4cf7-5057-4542-ac60-0e7b432e998b
	  Boot ID:                    7a479143-663f-4f08-926c-92bb931337b4
	  Kernel Version:             5.15.0-1064-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-pl4h9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gcp-auth                    gcp-auth-5db96cd9b4-mh6ws                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m46s
	  headlamp                    headlamp-7867546754-hbj4f                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 coredns-7db6d8ff4d-7mkbx                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m56s
	  kube-system                 etcd-addons-783853                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7m9s
	  kube-system                 kindnet-cdpvw                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m56s
	  kube-system                 kube-apiserver-addons-783853             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-controller-manager-addons-783853    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-proxy-v7srs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m56s
	  kube-system                 kube-scheduler-addons-783853             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 metrics-server-c59844bb4-znqdq           100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m50s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m50s
	  yakd-dashboard              yakd-dashboard-799879c74f-7hmg4          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     6m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             548Mi (6%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m50s  kube-proxy       
	  Normal  Starting                 7m9s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m9s   kubelet          Node addons-783853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m9s   kubelet          Node addons-783853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m9s   kubelet          Node addons-783853 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m57s  node-controller  Node addons-783853 event: Registered Node addons-783853 in Controller
	  Normal  NodeReady                6m9s   kubelet          Node addons-783853 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000788] FS-Cache: N-cookie c=0000012c [p=00000123 fl=2 nc=0 na=1]
	[  +0.000974] FS-Cache: N-cookie d=00000000656be40d{9p.inode} n=00000000f512889c
	[  +0.001085] FS-Cache: N-key=[8] 'e17a3b0000000000'
	[  +0.002775] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000126 [p=00000123 fl=226 nc=0 na=1]
	[  +0.001001] FS-Cache: O-cookie d=00000000656be40d{9p.inode} n=00000000325d45d1
	[  +0.001152] FS-Cache: O-key=[8] 'e17a3b0000000000'
	[  +0.000783] FS-Cache: N-cookie c=0000012d [p=00000123 fl=2 nc=0 na=1]
	[  +0.000979] FS-Cache: N-cookie d=00000000656be40d{9p.inode} n=00000000d6f0730b
	[  +0.001094] FS-Cache: N-key=[8] 'e17a3b0000000000'
	[  +2.326910] FS-Cache: Duplicate cookie detected
	[  +0.000819] FS-Cache: O-cookie c=00000124 [p=00000123 fl=226 nc=0 na=1]
	[  +0.001037] FS-Cache: O-cookie d=00000000656be40d{9p.inode} n=000000004360beb1
	[  +0.001189] FS-Cache: O-key=[8] 'e07a3b0000000000'
	[  +0.000792] FS-Cache: N-cookie c=0000012f [p=00000123 fl=2 nc=0 na=1]
	[  +0.001032] FS-Cache: N-cookie d=00000000656be40d{9p.inode} n=00000000b0a9a241
	[  +0.001150] FS-Cache: N-key=[8] 'e07a3b0000000000'
	[  +0.313505] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000129 [p=00000123 fl=226 nc=0 na=1]
	[  +0.000999] FS-Cache: O-cookie d=00000000656be40d{9p.inode} n=00000000305cdfe4
	[  +0.001117] FS-Cache: O-key=[8] 'e67a3b0000000000'
	[  +0.000728] FS-Cache: N-cookie c=00000130 [p=00000123 fl=2 nc=0 na=1]
	[  +0.000967] FS-Cache: N-cookie d=00000000656be40d{9p.inode} n=000000002ccbfa05
	[  +0.001083] FS-Cache: N-key=[8] 'e67a3b0000000000'
	[Jul22 00:00] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe] <==
	{"level":"info","ts":"2024-07-22T00:27:50.74877Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T00:27:50.748849Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T00:27:50.74475Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:27:50.744867Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:27:50.749202Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:27:50.74928Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:27:50.750785Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-07-22T00:28:11.723125Z","caller":"traceutil/trace.go:171","msg":"trace[16347025] transaction","detail":"{read_only:false; response_revision:368; number_of_response:1; }","duration":"115.038155ms","start":"2024-07-22T00:28:11.608071Z","end":"2024-07-22T00:28:11.723109Z","steps":["trace[16347025] 'process raft request'  (duration: 114.924447ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T00:28:14.043432Z","caller":"traceutil/trace.go:171","msg":"trace[658452004] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"159.858354ms","start":"2024-07-22T00:28:13.883556Z","end":"2024-07-22T00:28:14.043415Z","steps":["trace[658452004] 'process raft request'  (duration: 159.700888ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T00:28:14.045187Z","caller":"traceutil/trace.go:171","msg":"trace[1874793671] linearizableReadLoop","detail":"{readStateIndex:394; appliedIndex:394; }","duration":"127.978644ms","start":"2024-07-22T00:28:13.917194Z","end":"2024-07-22T00:28:14.045173Z","steps":["trace[1874793671] 'read index received'  (duration: 127.824214ms)","trace[1874793671] 'applied index is now lower than readState.Index'  (duration: 153.339µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-22T00:28:14.053102Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.237998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-22T00:28:14.053227Z","caller":"traceutil/trace.go:171","msg":"trace[1965193589] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:382; }","duration":"152.381591ms","start":"2024-07-22T00:28:13.900831Z","end":"2024-07-22T00:28:14.053213Z","steps":["trace[1965193589] 'agreement among raft nodes before linearized reading'  (duration: 152.137034ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T00:28:14.121592Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.510252ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-22T00:28:14.121739Z","caller":"traceutil/trace.go:171","msg":"trace[1510974284] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:0; response_revision:383; }","duration":"194.666684ms","start":"2024-07-22T00:28:13.927058Z","end":"2024-07-22T00:28:14.121725Z","steps":["trace[1510974284] 'agreement among raft nodes before linearized reading'  (duration: 194.478924ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T00:28:14.122124Z","caller":"traceutil/trace.go:171","msg":"trace[897956857] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"198.797486ms","start":"2024-07-22T00:28:13.923317Z","end":"2024-07-22T00:28:14.122114Z","steps":["trace[897956857] 'process raft request'  (duration: 198.054355ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T00:28:14.122319Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.019145ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:1 size:3145"}
	{"level":"info","ts":"2024-07-22T00:28:14.122378Z","caller":"traceutil/trace.go:171","msg":"trace[655934887] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:1; response_revision:383; }","duration":"110.081907ms","start":"2024-07-22T00:28:14.012289Z","end":"2024-07-22T00:28:14.122371Z","steps":["trace[655934887] 'agreement among raft nodes before linearized reading'  (duration: 109.991871ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T00:28:14.122533Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.274384ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-07-22T00:28:14.122586Z","caller":"traceutil/trace.go:171","msg":"trace[2081242360] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:383; }","duration":"115.328104ms","start":"2024-07-22T00:28:14.007251Z","end":"2024-07-22T00:28:14.12258Z","steps":["trace[2081242360] 'agreement among raft nodes before linearized reading'  (duration: 115.253616ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T00:28:14.122675Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.468782ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-07-22T00:28:14.12272Z","caller":"traceutil/trace.go:171","msg":"trace[2025422303] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:383; }","duration":"115.513944ms","start":"2024-07-22T00:28:14.007201Z","end":"2024-07-22T00:28:14.122715Z","steps":["trace[2025422303] 'agreement among raft nodes before linearized reading'  (duration: 115.45549ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T00:28:14.122808Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.910909ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-07-22T00:28:14.122855Z","caller":"traceutil/trace.go:171","msg":"trace[1337862840] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:383; }","duration":"116.957835ms","start":"2024-07-22T00:28:14.005889Z","end":"2024-07-22T00:28:14.122847Z","steps":["trace[1337862840] 'agreement among raft nodes before linearized reading'  (duration: 116.896139ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T00:28:14.438867Z","caller":"traceutil/trace.go:171","msg":"trace[2027075324] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"105.931741ms","start":"2024-07-22T00:28:14.332917Z","end":"2024-07-22T00:28:14.438849Z","steps":["trace[2027075324] 'process raft request'  (duration: 52.087976ms)","trace[2027075324] 'compare'  (duration: 53.476696ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-22T00:28:14.462054Z","caller":"traceutil/trace.go:171","msg":"trace[1370139546] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"113.326462ms","start":"2024-07-22T00:28:14.348703Z","end":"2024-07-22T00:28:14.462029Z","steps":["trace[1370139546] 'process raft request'  (duration: 89.881124ms)"],"step_count":1}
	
	
	==> gcp-auth [bb90afc7e859bec3c7d9d17676acc458e87de8f1922ef9b68d3f30354d7cc83e] <==
	2024/07/22 00:29:56 GCP Auth Webhook started!
	2024/07/22 00:31:09 Ready to marshal response ...
	2024/07/22 00:31:09 Ready to write response ...
	2024/07/22 00:31:09 Ready to marshal response ...
	2024/07/22 00:31:09 Ready to write response ...
	2024/07/22 00:31:09 Ready to marshal response ...
	2024/07/22 00:31:09 Ready to write response ...
	2024/07/22 00:31:18 Ready to marshal response ...
	2024/07/22 00:31:18 Ready to write response ...
	2024/07/22 00:31:25 Ready to marshal response ...
	2024/07/22 00:31:25 Ready to write response ...
	2024/07/22 00:31:25 Ready to marshal response ...
	2024/07/22 00:31:25 Ready to write response ...
	2024/07/22 00:31:34 Ready to marshal response ...
	2024/07/22 00:31:34 Ready to write response ...
	2024/07/22 00:31:53 Ready to marshal response ...
	2024/07/22 00:31:53 Ready to write response ...
	2024/07/22 00:32:16 Ready to marshal response ...
	2024/07/22 00:32:16 Ready to write response ...
	2024/07/22 00:32:33 Ready to marshal response ...
	2024/07/22 00:32:33 Ready to write response ...
	2024/07/22 00:34:55 Ready to marshal response ...
	2024/07/22 00:34:55 Ready to write response ...
	
	
	==> kernel <==
	 00:35:05 up 1 day,  8:17,  0 users,  load average: 0.14, 1.12, 2.09
	Linux addons-783853 5.15.0-1064-aws #70~20.04.1-Ubuntu SMP Thu Jun 27 14:52:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b] <==
	I0722 00:33:55.727415       1 main.go:299] handling current node
	W0722 00:33:56.707603       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0722 00:33:56.707642       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0722 00:34:03.477253       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0722 00:34:03.477288       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0722 00:34:05.726834       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0722 00:34:05.726872       1 main.go:299] handling current node
	I0722 00:34:15.726398       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0722 00:34:15.726434       1 main.go:299] handling current node
	W0722 00:34:21.262661       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 00:34:21.262696       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0722 00:34:25.727033       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0722 00:34:25.727068       1 main.go:299] handling current node
	I0722 00:34:35.726808       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0722 00:34:35.726841       1 main.go:299] handling current node
	W0722 00:34:39.865599       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0722 00:34:39.865667       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0722 00:34:45.727366       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0722 00:34:45.727412       1 main.go:299] handling current node
	I0722 00:34:55.727333       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0722 00:34:55.727385       1 main.go:299] handling current node
	W0722 00:35:02.785640       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0722 00:35:02.785672       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0722 00:35:05.726633       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0722 00:35:05.726686       1 main.go:299] handling current node
	
	
	==> kube-apiserver [f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c] <==
	W0722 00:30:39.192149       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 00:30:39.192198       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 00:30:39.241274       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0722 00:31:09.055537       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.140.170"}
	E0722 00:31:50.159861       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0722 00:32:05.472420       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0722 00:32:24.066445       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0722 00:32:25.099123       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0722 00:32:32.935015       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0722 00:32:32.935069       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0722 00:32:32.975669       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0722 00:32:32.975804       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0722 00:32:32.977693       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0722 00:32:32.977800       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0722 00:32:32.989517       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0722 00:32:32.990048       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0722 00:32:33.026234       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0722 00:32:33.026373       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0722 00:32:33.570050       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0722 00:32:33.900686       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.130.95"}
	W0722 00:32:33.978051       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0722 00:32:34.026880       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0722 00:32:34.044770       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0722 00:34:55.517712       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.191.50"}
	
	
	==> kube-controller-manager [a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab] <==
	E0722 00:33:16.736364       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 00:33:31.229761       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 00:33:31.229802       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 00:33:40.735610       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 00:33:40.735645       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 00:33:57.913448       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 00:33:57.913485       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 00:34:03.480429       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 00:34:03.480471       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 00:34:21.203790       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 00:34:21.203834       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 00:34:22.167846       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 00:34:22.167890       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 00:34:46.318419       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 00:34:46.318458       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 00:34:54.112684       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 00:34:54.112723       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0722 00:34:55.318440       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="65.973467ms"
	I0722 00:34:55.356898       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="38.331151ms"
	I0722 00:34:55.358005       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="31.516µs"
	I0722 00:34:57.348244       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0722 00:34:57.353249       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="4.283µs"
	I0722 00:34:57.356297       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0722 00:34:57.623100       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="7.584458ms"
	I0722 00:34:57.623165       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="30.442µs"
	
	
	==> kube-proxy [7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d] <==
	I0722 00:28:15.248192       1 server_linux.go:69] "Using iptables proxy"
	I0722 00:28:15.496104       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0722 00:28:15.575735       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0722 00:28:15.575791       1 server_linux.go:165] "Using iptables Proxier"
	I0722 00:28:15.776890       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0722 00:28:15.776928       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0722 00:28:15.776958       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 00:28:15.777188       1 server.go:872] "Version info" version="v1.30.3"
	I0722 00:28:15.777213       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:28:15.794390       1 config.go:192] "Starting service config controller"
	I0722 00:28:15.794495       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 00:28:15.794562       1 config.go:101] "Starting endpoint slice config controller"
	I0722 00:28:15.794595       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 00:28:15.795098       1 config.go:319] "Starting node config controller"
	I0722 00:28:15.796848       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 00:28:15.896844       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 00:28:15.898446       1 shared_informer.go:320] Caches are synced for node config
	I0722 00:28:15.898477       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28] <==
	W0722 00:27:53.792874       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 00:27:53.792972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0722 00:27:53.793050       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 00:27:53.793062       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0722 00:27:53.793132       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 00:27:53.793145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0722 00:27:53.793183       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 00:27:53.793194       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0722 00:27:54.675234       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0722 00:27:54.675368       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0722 00:27:54.751306       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0722 00:27:54.751348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0722 00:27:54.757602       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 00:27:54.757711       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0722 00:27:54.787887       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 00:27:54.787938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 00:27:54.853623       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 00:27:54.853754       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0722 00:27:54.855824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0722 00:27:54.855956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0722 00:27:54.866225       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0722 00:27:54.866349       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0722 00:27:55.109686       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 00:27:55.109815       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0722 00:27:57.088211       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 22 00:34:55 addons-783853 kubelet[1534]: I0722 00:34:55.404580    1534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4f344f58-5efe-4487-9f6f-6a6363036387-gcp-creds\") pod \"hello-world-app-6778b5fc9f-pl4h9\" (UID: \"4f344f58-5efe-4487-9f6f-6a6363036387\") " pod="default/hello-world-app-6778b5fc9f-pl4h9"
	Jul 22 00:34:55 addons-783853 kubelet[1534]: I0722 00:34:55.404635    1534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sc9v\" (UniqueName: \"kubernetes.io/projected/4f344f58-5efe-4487-9f6f-6a6363036387-kube-api-access-7sc9v\") pod \"hello-world-app-6778b5fc9f-pl4h9\" (UID: \"4f344f58-5efe-4487-9f6f-6a6363036387\") " pod="default/hello-world-app-6778b5fc9f-pl4h9"
	Jul 22 00:34:56 addons-783853 kubelet[1534]: I0722 00:34:56.517341    1534 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5prp9\" (UniqueName: \"kubernetes.io/projected/4f67e797-baba-4022-b2e9-f969cb82f4fb-kube-api-access-5prp9\") pod \"4f67e797-baba-4022-b2e9-f969cb82f4fb\" (UID: \"4f67e797-baba-4022-b2e9-f969cb82f4fb\") "
	Jul 22 00:34:56 addons-783853 kubelet[1534]: I0722 00:34:56.519288    1534 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f67e797-baba-4022-b2e9-f969cb82f4fb-kube-api-access-5prp9" (OuterVolumeSpecName: "kube-api-access-5prp9") pod "4f67e797-baba-4022-b2e9-f969cb82f4fb" (UID: "4f67e797-baba-4022-b2e9-f969cb82f4fb"). InnerVolumeSpecName "kube-api-access-5prp9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 22 00:34:56 addons-783853 kubelet[1534]: I0722 00:34:56.598990    1534 scope.go:117] "RemoveContainer" containerID="2ace4ced676387dc0d3ce64ab681e99b8efe3c695ca0aea24ca1e3443beba61d"
	Jul 22 00:34:56 addons-783853 kubelet[1534]: I0722 00:34:56.617794    1534 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-5prp9\" (UniqueName: \"kubernetes.io/projected/4f67e797-baba-4022-b2e9-f969cb82f4fb-kube-api-access-5prp9\") on node \"addons-783853\" DevicePath \"\""
	Jul 22 00:34:56 addons-783853 kubelet[1534]: I0722 00:34:56.635015    1534 scope.go:117] "RemoveContainer" containerID="2ace4ced676387dc0d3ce64ab681e99b8efe3c695ca0aea24ca1e3443beba61d"
	Jul 22 00:34:56 addons-783853 kubelet[1534]: E0722 00:34:56.635521    1534 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ace4ced676387dc0d3ce64ab681e99b8efe3c695ca0aea24ca1e3443beba61d\": container with ID starting with 2ace4ced676387dc0d3ce64ab681e99b8efe3c695ca0aea24ca1e3443beba61d not found: ID does not exist" containerID="2ace4ced676387dc0d3ce64ab681e99b8efe3c695ca0aea24ca1e3443beba61d"
	Jul 22 00:34:56 addons-783853 kubelet[1534]: I0722 00:34:56.635555    1534 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ace4ced676387dc0d3ce64ab681e99b8efe3c695ca0aea24ca1e3443beba61d"} err="failed to get container status \"2ace4ced676387dc0d3ce64ab681e99b8efe3c695ca0aea24ca1e3443beba61d\": rpc error: code = NotFound desc = could not find container \"2ace4ced676387dc0d3ce64ab681e99b8efe3c695ca0aea24ca1e3443beba61d\": container with ID starting with 2ace4ced676387dc0d3ce64ab681e99b8efe3c695ca0aea24ca1e3443beba61d not found: ID does not exist"
	Jul 22 00:34:58 addons-783853 kubelet[1534]: I0722 00:34:58.145633    1534 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39815a92-41c6-4d0f-af22-9a5458fc0480" path="/var/lib/kubelet/pods/39815a92-41c6-4d0f-af22-9a5458fc0480/volumes"
	Jul 22 00:34:58 addons-783853 kubelet[1534]: I0722 00:34:58.146046    1534 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f67e797-baba-4022-b2e9-f969cb82f4fb" path="/var/lib/kubelet/pods/4f67e797-baba-4022-b2e9-f969cb82f4fb/volumes"
	Jul 22 00:34:58 addons-783853 kubelet[1534]: I0722 00:34:58.146424    1534 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5c5341b-a507-4a94-b301-ac25d5f9d4ed" path="/var/lib/kubelet/pods/f5c5341b-a507-4a94-b301-ac25d5f9d4ed/volumes"
	Jul 22 00:34:58 addons-783853 kubelet[1534]: I0722 00:34:58.223093    1534 scope.go:117] "RemoveContainer" containerID="5debcbae3deb69cf2e0876e541415e1d1af6c5e5129b933207b1cf7a5757a849"
	Jul 22 00:34:58 addons-783853 kubelet[1534]: I0722 00:34:58.239675    1534 scope.go:117] "RemoveContainer" containerID="de4f416bafa9c8cb936d99382894bf2c8dff817756b430ed7c220e637dcd92a3"
	Jul 22 00:35:00 addons-783853 kubelet[1534]: I0722 00:35:00.613415    1534 scope.go:117] "RemoveContainer" containerID="c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661"
	Jul 22 00:35:00 addons-783853 kubelet[1534]: I0722 00:35:00.629728    1534 scope.go:117] "RemoveContainer" containerID="c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661"
	Jul 22 00:35:00 addons-783853 kubelet[1534]: E0722 00:35:00.630315    1534 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661\": container with ID starting with c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661 not found: ID does not exist" containerID="c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661"
	Jul 22 00:35:00 addons-783853 kubelet[1534]: I0722 00:35:00.630360    1534 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661"} err="failed to get container status \"c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661\": rpc error: code = NotFound desc = could not find container \"c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661\": container with ID starting with c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661 not found: ID does not exist"
	Jul 22 00:35:00 addons-783853 kubelet[1534]: I0722 00:35:00.641726    1534 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47b77f41-f681-441d-bf14-c37ac84b670d-webhook-cert\") pod \"47b77f41-f681-441d-bf14-c37ac84b670d\" (UID: \"47b77f41-f681-441d-bf14-c37ac84b670d\") "
	Jul 22 00:35:00 addons-783853 kubelet[1534]: I0722 00:35:00.641786    1534 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz8pm\" (UniqueName: \"kubernetes.io/projected/47b77f41-f681-441d-bf14-c37ac84b670d-kube-api-access-wz8pm\") pod \"47b77f41-f681-441d-bf14-c37ac84b670d\" (UID: \"47b77f41-f681-441d-bf14-c37ac84b670d\") "
	Jul 22 00:35:00 addons-783853 kubelet[1534]: I0722 00:35:00.644064    1534 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47b77f41-f681-441d-bf14-c37ac84b670d-kube-api-access-wz8pm" (OuterVolumeSpecName: "kube-api-access-wz8pm") pod "47b77f41-f681-441d-bf14-c37ac84b670d" (UID: "47b77f41-f681-441d-bf14-c37ac84b670d"). InnerVolumeSpecName "kube-api-access-wz8pm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 22 00:35:00 addons-783853 kubelet[1534]: I0722 00:35:00.646837    1534 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b77f41-f681-441d-bf14-c37ac84b670d-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "47b77f41-f681-441d-bf14-c37ac84b670d" (UID: "47b77f41-f681-441d-bf14-c37ac84b670d"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 22 00:35:00 addons-783853 kubelet[1534]: I0722 00:35:00.742638    1534 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47b77f41-f681-441d-bf14-c37ac84b670d-webhook-cert\") on node \"addons-783853\" DevicePath \"\""
	Jul 22 00:35:00 addons-783853 kubelet[1534]: I0722 00:35:00.742679    1534 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wz8pm\" (UniqueName: \"kubernetes.io/projected/47b77f41-f681-441d-bf14-c37ac84b670d-kube-api-access-wz8pm\") on node \"addons-783853\" DevicePath \"\""
	Jul 22 00:35:02 addons-783853 kubelet[1534]: I0722 00:35:02.146060    1534 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47b77f41-f681-441d-bf14-c37ac84b670d" path="/var/lib/kubelet/pods/47b77f41-f681-441d-bf14-c37ac84b670d/volumes"
	
	
	==> storage-provisioner [461123cde69274b0178f9b430cab234c44f0fea1cb24d5aea19d9e852053d4cc] <==
	I0722 00:28:57.018809       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 00:28:57.038428       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 00:28:57.038483       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 00:28:57.047567       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 00:28:57.047729       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-783853_c4cfa4ab-8726-4790-b31b-4df7b6a36898!
	I0722 00:28:57.047786       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0480adff-28ec-454a-a5e8-4dbbc5a90dfd", APIVersion:"v1", ResourceVersion:"911", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-783853_c4cfa4ab-8726-4790-b31b-4df7b6a36898 became leader
	I0722 00:28:57.148393       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-783853_c4cfa4ab-8726-4790-b31b-4df7b6a36898!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-783853 -n addons-783853
helpers_test.go:261: (dbg) Run:  kubectl --context addons-783853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.71s)

TestAddons/parallel/MetricsServer (281.94s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.411941ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-znqdq" [3ecb4a8a-e4fc-46e1-b6cb-e0a2f7adc362] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009089931s
addons_test.go:417: (dbg) Run:  kubectl --context addons-783853 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-783853 top pods -n kube-system: exit status 1 (88.813258ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7mkbx, age: 4m25.42201144s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-783853 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-783853 top pods -n kube-system: exit status 1 (96.517529ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7mkbx, age: 4m29.838260391s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-783853 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-783853 top pods -n kube-system: exit status 1 (87.066364ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7mkbx, age: 4m34.695930377s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-783853 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-783853 top pods -n kube-system: exit status 1 (88.024275ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7mkbx, age: 4m39.981301344s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-783853 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-783853 top pods -n kube-system: exit status 1 (94.115521ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7mkbx, age: 4m52.515935457s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-783853 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-783853 top pods -n kube-system: exit status 1 (85.054286ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7mkbx, age: 5m4.853811801s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-783853 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-783853 top pods -n kube-system: exit status 1 (85.050142ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7mkbx, age: 5m24.642996246s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-783853 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-783853 top pods -n kube-system: exit status 1 (94.320685ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7mkbx, age: 5m49.036070938s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-783853 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-783853 top pods -n kube-system: exit status 1 (87.410567ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7mkbx, age: 6m20.900661883s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-783853 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-783853 top pods -n kube-system: exit status 1 (82.545717ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7mkbx, age: 7m28.880690523s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-783853 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-783853 top pods -n kube-system: exit status 1 (88.747881ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7mkbx, age: 8m27.336710256s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-783853 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-783853 top pods -n kube-system: exit status 1 (86.600794ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7mkbx, age: 8m59.076206965s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-783853 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-783853
helpers_test.go:235: (dbg) docker inspect addons-783853:

-- stdout --
	[
	    {
	        "Id": "4abbb53d7e22a670a6ee51af508267e7f03cef42275cc04f9194102316e1c41d",
	        "Created": "2024-07-22T00:27:34.861807245Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 533726,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-22T00:27:34.997655166Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e2c91a2178aa1acdb3eade350c62303b0cf135b362b91c6aa21cd060c2dbfcac",
	        "ResolvConfPath": "/var/lib/docker/containers/4abbb53d7e22a670a6ee51af508267e7f03cef42275cc04f9194102316e1c41d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4abbb53d7e22a670a6ee51af508267e7f03cef42275cc04f9194102316e1c41d/hostname",
	        "HostsPath": "/var/lib/docker/containers/4abbb53d7e22a670a6ee51af508267e7f03cef42275cc04f9194102316e1c41d/hosts",
	        "LogPath": "/var/lib/docker/containers/4abbb53d7e22a670a6ee51af508267e7f03cef42275cc04f9194102316e1c41d/4abbb53d7e22a670a6ee51af508267e7f03cef42275cc04f9194102316e1c41d-json.log",
	        "Name": "/addons-783853",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-783853:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-783853",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9488449c619f1392ba3b0b1c7a2d4ec41bf726d2377d30379afac14f034b69a5-init/diff:/var/lib/docker/overlay2/0bbbe9537bb983273c69d2396c833f2bdeab0de0333f7a8438fa8a8aec393d0a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9488449c619f1392ba3b0b1c7a2d4ec41bf726d2377d30379afac14f034b69a5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9488449c619f1392ba3b0b1c7a2d4ec41bf726d2377d30379afac14f034b69a5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9488449c619f1392ba3b0b1c7a2d4ec41bf726d2377d30379afac14f034b69a5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-783853",
	                "Source": "/var/lib/docker/volumes/addons-783853/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-783853",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-783853",
	                "name.minikube.sigs.k8s.io": "addons-783853",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d8c00c57964e088368b55ff0c9061679f484e7ceb21197bf8a5b1c4c0f9dd914",
	            "SandboxKey": "/var/run/docker/netns/d8c00c57964e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38981"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38982"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38985"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38983"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38984"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-783853": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "bd40498e70f76a9ad7520c6de89a05a6866dcab232044897434d10ec91edbae9",
	                    "EndpointID": "8baad34fd63d857c6d2a1bbcc0ee2d8097c64494a34bec2f90585656525c594c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-783853",
	                        "4abbb53d7e22"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-783853 -n addons-783853
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-783853 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-783853 logs -n 25: (1.573187998s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-899574                                                                     | download-only-899574   | jenkins | v1.33.1 | 22 Jul 24 00:27 UTC | 22 Jul 24 00:27 UTC |
	| delete  | -p download-only-182209                                                                     | download-only-182209   | jenkins | v1.33.1 | 22 Jul 24 00:27 UTC | 22 Jul 24 00:27 UTC |
	| delete  | -p download-only-177991                                                                     | download-only-177991   | jenkins | v1.33.1 | 22 Jul 24 00:27 UTC | 22 Jul 24 00:27 UTC |
	| start   | --download-only -p                                                                          | download-docker-688994 | jenkins | v1.33.1 | 22 Jul 24 00:27 UTC |                     |
	|         | download-docker-688994                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-688994                                                                   | download-docker-688994 | jenkins | v1.33.1 | 22 Jul 24 00:27 UTC | 22 Jul 24 00:27 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-175978   | jenkins | v1.33.1 | 22 Jul 24 00:27 UTC |                     |
	|         | binary-mirror-175978                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:32849                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-175978                                                                     | binary-mirror-175978   | jenkins | v1.33.1 | 22 Jul 24 00:27 UTC | 22 Jul 24 00:27 UTC |
	| addons  | disable dashboard -p                                                                        | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:27 UTC |                     |
	|         | addons-783853                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:27 UTC |                     |
	|         | addons-783853                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-783853 --wait=true                                                                | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:27 UTC | 22 Jul 24 00:31 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:31 UTC | 22 Jul 24 00:31 UTC |
	|         | -p addons-783853                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-783853 ip                                                                            | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:31 UTC | 22 Jul 24 00:31 UTC |
	| addons  | addons-783853 addons disable                                                                | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:31 UTC | 22 Jul 24 00:31 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:31 UTC | 22 Jul 24 00:31 UTC |
	|         | -p addons-783853                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-783853 ssh cat                                                                       | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:31 UTC | 22 Jul 24 00:31 UTC |
	|         | /opt/local-path-provisioner/pvc-a10fb3fc-c913-4254-9002-57f08ecaf0f2_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-783853 addons disable                                                                | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:31 UTC | 22 Jul 24 00:32 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:31 UTC | 22 Jul 24 00:31 UTC |
	|         | addons-783853                                                                               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:32 UTC | 22 Jul 24 00:32 UTC |
	|         | addons-783853                                                                               |                        |         |         |                     |                     |
	| addons  | addons-783853 addons                                                                        | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:32 UTC | 22 Jul 24 00:32 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-783853 addons                                                                        | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:32 UTC | 22 Jul 24 00:32 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-783853 ssh curl -s                                                                   | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:32 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-783853 ip                                                                            | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:34 UTC | 22 Jul 24 00:34 UTC |
	| addons  | addons-783853 addons disable                                                                | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:34 UTC | 22 Jul 24 00:34 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-783853 addons disable                                                                | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:34 UTC | 22 Jul 24 00:35 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-783853 addons                                                                        | addons-783853          | jenkins | v1.33.1 | 22 Jul 24 00:37 UTC | 22 Jul 24 00:37 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 00:27:10
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 00:27:10.778649  533196 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:27:10.778848  533196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:27:10.778877  533196 out.go:304] Setting ErrFile to fd 2...
	I0722 00:27:10.778897  533196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:27:10.779179  533196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-526659/.minikube/bin
	I0722 00:27:10.779639  533196 out.go:298] Setting JSON to false
	I0722 00:27:10.780576  533196 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":115782,"bootTime":1721492249,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0722 00:27:10.780673  533196 start.go:139] virtualization:  
	I0722 00:27:10.783124  533196 out.go:177] * [addons-783853] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0722 00:27:10.785451  533196 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:27:10.785522  533196 notify.go:220] Checking for updates...
	I0722 00:27:10.789138  533196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:27:10.790883  533196 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-526659/kubeconfig
	I0722 00:27:10.793729  533196 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-526659/.minikube
	I0722 00:27:10.795743  533196 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0722 00:27:10.797763  533196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 00:27:10.799828  533196 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:27:10.828132  533196 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0722 00:27:10.828247  533196 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0722 00:27:10.878889  533196 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-22 00:27:10.869681834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0722 00:27:10.879003  533196 docker.go:307] overlay module found
	I0722 00:27:10.880841  533196 out.go:177] * Using the docker driver based on user configuration
	I0722 00:27:10.882389  533196 start.go:297] selected driver: docker
	I0722 00:27:10.882408  533196 start.go:901] validating driver "docker" against <nil>
	I0722 00:27:10.882434  533196 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 00:27:10.883068  533196 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0722 00:27:10.945570  533196 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-22 00:27:10.936150292 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0722 00:27:10.945745  533196 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 00:27:10.945995  533196 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:27:10.948035  533196 out.go:177] * Using Docker driver with root privileges
	I0722 00:27:10.949663  533196 cni.go:84] Creating CNI manager for ""
	I0722 00:27:10.949683  533196 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0722 00:27:10.949700  533196 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0722 00:27:10.949834  533196 start.go:340] cluster config:
	{Name:addons-783853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-783853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:27:10.951869  533196 out.go:177] * Starting "addons-783853" primary control-plane node in "addons-783853" cluster
	I0722 00:27:10.953469  533196 cache.go:121] Beginning downloading kic base image for docker with crio
	I0722 00:27:10.955063  533196 out.go:177] * Pulling base image v0.0.44-1721324606-19298 ...
	I0722 00:27:10.956561  533196 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:27:10.956612  533196 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-526659/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	I0722 00:27:10.956625  533196 cache.go:56] Caching tarball of preloaded images
	I0722 00:27:10.956710  533196 preload.go:172] Found /home/jenkins/minikube-integration/19312-526659/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0722 00:27:10.956724  533196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 00:27:10.957130  533196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/config.json ...
	I0722 00:27:10.957164  533196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/config.json: {Name:mkec22b347b3f4f8439a05f8b676bc43b45a69f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:27:10.957331  533196 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0722 00:27:10.971575  533196 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0722 00:27:10.971705  533196 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0722 00:27:10.971728  533196 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0722 00:27:10.971736  533196 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0722 00:27:10.971744  533196 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0722 00:27:10.971752  533196 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from local cache
	I0722 00:27:27.662993  533196 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from cached tarball
	I0722 00:27:27.663034  533196 cache.go:194] Successfully downloaded all kic artifacts
	I0722 00:27:27.663085  533196 start.go:360] acquireMachinesLock for addons-783853: {Name:mk23ed81c9ab4a4da7fcd8d2ab7dd25d44ee9926 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:27:27.663787  533196 start.go:364] duration metric: took 674.922µs to acquireMachinesLock for "addons-783853"
	I0722 00:27:27.663825  533196 start.go:93] Provisioning new machine with config: &{Name:addons-783853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-783853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:27:27.663912  533196 start.go:125] createHost starting for "" (driver="docker")
	I0722 00:27:27.666210  533196 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0722 00:27:27.666444  533196 start.go:159] libmachine.API.Create for "addons-783853" (driver="docker")
	I0722 00:27:27.666477  533196 client.go:168] LocalClient.Create starting
	I0722 00:27:27.666595  533196 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19312-526659/.minikube/certs/ca.pem
	I0722 00:27:28.087092  533196 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19312-526659/.minikube/certs/cert.pem
	I0722 00:27:28.332499  533196 cli_runner.go:164] Run: docker network inspect addons-783853 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0722 00:27:28.348801  533196 cli_runner.go:211] docker network inspect addons-783853 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0722 00:27:28.348889  533196 network_create.go:284] running [docker network inspect addons-783853] to gather additional debugging logs...
	I0722 00:27:28.348908  533196 cli_runner.go:164] Run: docker network inspect addons-783853
	W0722 00:27:28.362490  533196 cli_runner.go:211] docker network inspect addons-783853 returned with exit code 1
	I0722 00:27:28.362525  533196 network_create.go:287] error running [docker network inspect addons-783853]: docker network inspect addons-783853: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-783853 not found
	I0722 00:27:28.362538  533196 network_create.go:289] output of [docker network inspect addons-783853]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-783853 not found
	
	** /stderr **
	I0722 00:27:28.362634  533196 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0722 00:27:28.376513  533196 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000490170}
	I0722 00:27:28.376554  533196 network_create.go:124] attempt to create docker network addons-783853 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0722 00:27:28.376610  533196 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-783853 addons-783853
	I0722 00:27:28.443047  533196 network_create.go:108] docker network addons-783853 192.168.49.0/24 created
	I0722 00:27:28.443087  533196 kic.go:121] calculated static IP "192.168.49.2" for the "addons-783853" container
	I0722 00:27:28.443160  533196 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0722 00:27:28.458021  533196 cli_runner.go:164] Run: docker volume create addons-783853 --label name.minikube.sigs.k8s.io=addons-783853 --label created_by.minikube.sigs.k8s.io=true
	I0722 00:27:28.474547  533196 oci.go:103] Successfully created a docker volume addons-783853
	I0722 00:27:28.474641  533196 cli_runner.go:164] Run: docker run --rm --name addons-783853-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-783853 --entrypoint /usr/bin/test -v addons-783853:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0722 00:27:30.570394  533196 cli_runner.go:217] Completed: docker run --rm --name addons-783853-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-783853 --entrypoint /usr/bin/test -v addons-783853:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib: (2.095711473s)
	I0722 00:27:30.570428  533196 oci.go:107] Successfully prepared a docker volume addons-783853
	I0722 00:27:30.570444  533196 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:27:30.570463  533196 kic.go:194] Starting extracting preloaded images to volume ...
	I0722 00:27:30.570548  533196 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19312-526659/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-783853:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0722 00:27:34.799206  533196 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19312-526659/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-783853:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir: (4.22861531s)
	I0722 00:27:34.799240  533196 kic.go:203] duration metric: took 4.228773712s to extract preloaded images to volume ...
	W0722 00:27:34.799393  533196 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0722 00:27:34.799512  533196 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0722 00:27:34.847515  533196 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-783853 --name addons-783853 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-783853 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-783853 --network addons-783853 --ip 192.168.49.2 --volume addons-783853:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f
	I0722 00:27:35.172931  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Running}}
	I0722 00:27:35.193276  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:27:35.215373  533196 cli_runner.go:164] Run: docker exec addons-783853 stat /var/lib/dpkg/alternatives/iptables
	I0722 00:27:35.267632  533196 oci.go:144] the created container "addons-783853" has a running status.
	I0722 00:27:35.267660  533196 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa...
	I0722 00:27:35.849542  533196 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0722 00:27:35.873284  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:27:35.902409  533196 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0722 00:27:35.902429  533196 kic_runner.go:114] Args: [docker exec --privileged addons-783853 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0722 00:27:35.974149  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:27:35.999781  533196 machine.go:94] provisionDockerMachine start ...
	I0722 00:27:35.999870  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:27:36.029004  533196 main.go:141] libmachine: Using SSH client type: native
	I0722 00:27:36.029340  533196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38981 <nil> <nil>}
	I0722 00:27:36.029352  533196 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:27:36.188478  533196 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-783853
	
	I0722 00:27:36.188501  533196 ubuntu.go:169] provisioning hostname "addons-783853"
	I0722 00:27:36.188569  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:27:36.211903  533196 main.go:141] libmachine: Using SSH client type: native
	I0722 00:27:36.212172  533196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38981 <nil> <nil>}
	I0722 00:27:36.212185  533196 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-783853 && echo "addons-783853" | sudo tee /etc/hostname
	I0722 00:27:36.356251  533196 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-783853
	
	I0722 00:27:36.356330  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:27:36.372898  533196 main.go:141] libmachine: Using SSH client type: native
	I0722 00:27:36.373143  533196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38981 <nil> <nil>}
	I0722 00:27:36.373159  533196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-783853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-783853/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-783853' | sudo tee -a /etc/hosts; 
				fi
			fi
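The idempotent /etc/hosts edit above can be replayed as a standalone sketch. This runs against a scratch copy so no root is needed; `HOSTS` and `NAME` are illustrative stand-ins, and GNU sed is assumed for `-i`.

```shell
# Scratch copy of an /etc/hosts file; no root required.
HOSTS=$(mktemp)
NAME=addons-783853
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # A 127.0.1.1 entry exists: rewrite it in place (the sed branch above).
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # No entry yet: append one (the tee -a branch above).
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

Re-running the snippet is a no-op once the hostname is present, which is why minikube can apply it on every provision.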
	I0722 00:27:36.492627  533196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:27:36.492655  533196 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19312-526659/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-526659/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-526659/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-526659/.minikube}
	I0722 00:27:36.492680  533196 ubuntu.go:177] setting up certificates
	I0722 00:27:36.492693  533196 provision.go:84] configureAuth start
	I0722 00:27:36.492781  533196 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-783853
	I0722 00:27:36.509920  533196 provision.go:143] copyHostCerts
	I0722 00:27:36.510081  533196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-526659/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-526659/.minikube/cert.pem (1123 bytes)
	I0722 00:27:36.510210  533196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-526659/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-526659/.minikube/key.pem (1675 bytes)
	I0722 00:27:36.510278  533196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-526659/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-526659/.minikube/ca.pem (1078 bytes)
	I0722 00:27:36.510331  533196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-526659/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-526659/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-526659/.minikube/certs/ca-key.pem org=jenkins.addons-783853 san=[127.0.0.1 192.168.49.2 addons-783853 localhost minikube]
	I0722 00:27:36.717226  533196 provision.go:177] copyRemoteCerts
	I0722 00:27:36.717314  533196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:27:36.717361  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:27:36.733461  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:27:36.821548  533196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-526659/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 00:27:36.847624  533196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-526659/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0722 00:27:36.872428  533196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-526659/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:27:36.898898  533196 provision.go:87] duration metric: took 406.191296ms to configureAuth
	I0722 00:27:36.898925  533196 ubuntu.go:193] setting minikube options for container-runtime
	I0722 00:27:36.899109  533196 config.go:182] Loaded profile config "addons-783853": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:27:36.899226  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:27:36.916948  533196 main.go:141] libmachine: Using SSH client type: native
	I0722 00:27:36.917218  533196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38981 <nil> <nil>}
	I0722 00:27:36.917239  533196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:27:37.136697  533196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:27:37.136718  533196 machine.go:97] duration metric: took 1.136918196s to provisionDockerMachine
	I0722 00:27:37.136834  533196 client.go:171] duration metric: took 9.470245646s to LocalClient.Create
	I0722 00:27:37.136850  533196 start.go:167] duration metric: took 9.470405451s to libmachine.API.Create "addons-783853"
	I0722 00:27:37.136857  533196 start.go:293] postStartSetup for "addons-783853" (driver="docker")
	I0722 00:27:37.136868  533196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:27:37.136936  533196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:27:37.136982  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:27:37.154774  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:27:37.245763  533196 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:27:37.248889  533196 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0722 00:27:37.248958  533196 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0722 00:27:37.248974  533196 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0722 00:27:37.248982  533196 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0722 00:27:37.249008  533196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-526659/.minikube/addons for local assets ...
	I0722 00:27:37.249095  533196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-526659/.minikube/files for local assets ...
	I0722 00:27:37.249153  533196 start.go:296] duration metric: took 112.290287ms for postStartSetup
	I0722 00:27:37.249483  533196 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-783853
	I0722 00:27:37.265523  533196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/config.json ...
	I0722 00:27:37.265816  533196 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 00:27:37.265872  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:27:37.282055  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:27:37.369546  533196 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0722 00:27:37.373949  533196 start.go:128] duration metric: took 9.710019711s to createHost
	I0722 00:27:37.373973  533196 start.go:83] releasing machines lock for "addons-783853", held for 9.710168899s
	I0722 00:27:37.374069  533196 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-783853
	I0722 00:27:37.389723  533196 ssh_runner.go:195] Run: cat /version.json
	I0722 00:27:37.389737  533196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:27:37.389780  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:27:37.389801  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:27:37.407064  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:27:37.414085  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:27:37.622495  533196 ssh_runner.go:195] Run: systemctl --version
	I0722 00:27:37.626908  533196 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:27:37.767094  533196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0722 00:27:37.771352  533196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:27:37.791178  533196 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0722 00:27:37.791291  533196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:27:37.824031  533196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
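The disable-by-rename pattern used above (configs get a `.mk_disabled` suffix rather than being deleted, so they can be restored later) can be exercised on a scratch CNI directory; the filenames mirror the ones the log reports as disabled.

```shell
# Scratch stand-in for /etc/cni/net.d.
CNI=$(mktemp -d)
touch "$CNI/87-podman-bridge.conflist" "$CNI/100-crio-bridge.conf" "$CNI/10-kindnet.conflist"

# Rename every bridge/podman config that is not already disabled.
find "$CNI" -maxdepth 1 -type f \( -name '*bridge*' -o -name '*podman*' \) \
  -not -name '*.mk_disabled' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls "$CNI"
```

Only the kindnet config survives untouched, matching the log's outcome of disabling the podman and crio bridge configs while leaving the CNI minikube actually uses.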
	I0722 00:27:37.824056  533196 start.go:495] detecting cgroup driver to use...
	I0722 00:27:37.824108  533196 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0722 00:27:37.824165  533196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:27:37.839861  533196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:27:37.851948  533196 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:27:37.852056  533196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:27:37.866175  533196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:27:37.881025  533196 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:27:37.970821  533196 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:27:38.065566  533196 docker.go:233] disabling docker service ...
	I0722 00:27:38.065643  533196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:27:38.086531  533196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:27:38.098685  533196 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:27:38.186931  533196 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:27:38.283605  533196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:27:38.295280  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:27:38.312062  533196 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 00:27:38.312177  533196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:27:38.322414  533196 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:27:38.322516  533196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:27:38.332332  533196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:27:38.342315  533196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:27:38.352036  533196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:27:38.361966  533196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:27:38.371803  533196 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:27:38.388074  533196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
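The run of sed edits above can be replayed on a scratch copy of `02-crio.conf`. The values mirror the log; GNU sed is assumed (for `-i` and the `a` append command), and the starting file contents here are invented for illustration.

```shell
# Scratch stand-in for /etc/crio/crio.conf.d/02-crio.conf.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
pause_image = "registry.k8s.io/pause:3.8"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# Point cri-o at the pause image and cgroup driver minikube expects.
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
# Drop any existing conmon_cgroup, then re-add it right after cgroup_manager.
sed -i '/conmon_cgroup = .*/d' "$CONF"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
cat "$CONF"
```

The delete-then-append pair is what keeps the edit idempotent: however the file started, it ends with exactly one `conmon_cgroup = "pod"` line in a predictable position.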
	I0722 00:27:38.399220  533196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:27:38.407632  533196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:27:38.416060  533196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:27:38.494242  533196 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:27:38.605136  533196 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:27:38.605234  533196 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:27:38.608783  533196 start.go:563] Will wait 60s for crictl version
	I0722 00:27:38.608863  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:27:38.612050  533196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:27:38.648157  533196 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0722 00:27:38.648289  533196 ssh_runner.go:195] Run: crio --version
	I0722 00:27:38.687573  533196 ssh_runner.go:195] Run: crio --version
	I0722 00:27:38.731283  533196 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.24.6 ...
	I0722 00:27:38.733252  533196 cli_runner.go:164] Run: docker network inspect addons-783853 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0722 00:27:38.749343  533196 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0722 00:27:38.752787  533196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
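The filter-and-rewrite pattern above (drop any stale `host.minikube.internal` line, append the fresh mapping, copy the rewritten file back) can be sketched against a scratch file; the replacement IP here is an illustrative stand-in, not a value from this run.

```shell
# Scratch stand-in for /etc/hosts.
HOSTS=$(mktemp)
TAB=$(printf '\t')
printf '127.0.0.1 localhost\n192.168.49.1\thost.minikube.internal\n' > "$HOSTS"

# Filter out the old mapping, append the new one, then copy back.
{ grep -v "${TAB}host.minikube.internal\$" "$HOSTS"
  printf '192.168.58.1\thost.minikube.internal\n'   # illustrative new gateway IP
} > "$HOSTS.new"
cp "$HOSTS.new" "$HOSTS"
cat "$HOSTS"
```

Writing to a temp file and copying it over the original (rather than redirecting onto the file being read) is what makes the rewrite safe.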
	I0722 00:27:38.763268  533196 kubeadm.go:883] updating cluster {Name:addons-783853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-783853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:27:38.763397  533196 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:27:38.763462  533196 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:27:38.839559  533196 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:27:38.839581  533196 crio.go:433] Images already preloaded, skipping extraction
	I0722 00:27:38.839648  533196 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:27:38.875020  533196 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:27:38.875046  533196 cache_images.go:84] Images are preloaded, skipping loading
	I0722 00:27:38.875056  533196 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.3 crio true true} ...
	I0722 00:27:38.875157  533196 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-783853 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-783853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:27:38.875239  533196 ssh_runner.go:195] Run: crio config
	I0722 00:27:38.928648  533196 cni.go:84] Creating CNI manager for ""
	I0722 00:27:38.928679  533196 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0722 00:27:38.928691  533196 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:27:38.928715  533196 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-783853 NodeName:addons-783853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:27:38.928907  533196 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-783853"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 00:27:38.928990  533196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:27:38.938002  533196 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:27:38.938077  533196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:27:38.946956  533196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0722 00:27:38.966495  533196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:27:38.985031  533196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0722 00:27:39.005010  533196 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0722 00:27:39.010874  533196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:27:39.022265  533196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:27:39.104848  533196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:27:39.119132  533196 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853 for IP: 192.168.49.2
	I0722 00:27:39.119196  533196 certs.go:194] generating shared ca certs ...
	I0722 00:27:39.119226  533196 certs.go:226] acquiring lock for ca certs: {Name:mkdc7fe7e192116c10cb8e16455129169d01b878 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:27:39.119393  533196 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-526659/.minikube/ca.key
	I0722 00:27:39.476055  533196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-526659/.minikube/ca.crt ...
	I0722 00:27:39.476131  533196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-526659/.minikube/ca.crt: {Name:mkb33f73b23802ede958554614e4b008c48b2f10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:27:39.476357  533196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-526659/.minikube/ca.key ...
	I0722 00:27:39.476390  533196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-526659/.minikube/ca.key: {Name:mk1fd7d6078677bea533048c8859053762632ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:27:39.476937  533196 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-526659/.minikube/proxy-client-ca.key
	I0722 00:27:39.974794  533196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-526659/.minikube/proxy-client-ca.crt ...
	I0722 00:27:39.974865  533196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-526659/.minikube/proxy-client-ca.crt: {Name:mkb189d123154e6025a41e754cb075267d1419d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:27:39.975693  533196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-526659/.minikube/proxy-client-ca.key ...
	I0722 00:27:39.975743  533196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-526659/.minikube/proxy-client-ca.key: {Name:mkd77b44cda8132180ad1a361631e311da024968 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:27:39.976789  533196 certs.go:256] generating profile certs ...
	I0722 00:27:39.976871  533196 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.key
	I0722 00:27:39.976892  533196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt with IP's: []
	I0722 00:27:40.140302  533196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt ...
	I0722 00:27:40.140335  533196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: {Name:mkb7266298024a27cdd2f72065f78f1a4a0e8164 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:27:40.140997  533196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.key ...
	I0722 00:27:40.141017  533196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.key: {Name:mk24a76981df5d3ce591084fd9ee6d4a6b9c8150 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:27:40.141103  533196 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/apiserver.key.3ff2709b
	I0722 00:27:40.141119  533196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/apiserver.crt.3ff2709b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0722 00:27:40.452574  533196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/apiserver.crt.3ff2709b ...
	I0722 00:27:40.452603  533196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/apiserver.crt.3ff2709b: {Name:mkbb85f86f96dbff49a24a33699cbb07a9206e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:27:40.453231  533196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/apiserver.key.3ff2709b ...
	I0722 00:27:40.453252  533196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/apiserver.key.3ff2709b: {Name:mke0a6723860c1ff374f83d466535ba261d09ca3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:27:40.453352  533196 certs.go:381] copying /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/apiserver.crt.3ff2709b -> /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/apiserver.crt
	I0722 00:27:40.453432  533196 certs.go:385] copying /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/apiserver.key.3ff2709b -> /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/apiserver.key
	I0722 00:27:40.453487  533196 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/proxy-client.key
	I0722 00:27:40.453506  533196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/proxy-client.crt with IP's: []
	I0722 00:27:40.677557  533196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/proxy-client.crt ...
	I0722 00:27:40.677635  533196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/proxy-client.crt: {Name:mk5fdd9829212620bde6a507271dd05648de3b22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:27:40.677910  533196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/proxy-client.key ...
	I0722 00:27:40.677947  533196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/proxy-client.key: {Name:mk925f703c0b5e1260489129120df520eb854e5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:27:40.678262  533196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-526659/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 00:27:40.678340  533196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-526659/.minikube/certs/ca.pem (1078 bytes)
	I0722 00:27:40.678411  533196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-526659/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:27:40.678460  533196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-526659/.minikube/certs/key.pem (1675 bytes)
	I0722 00:27:40.679198  533196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-526659/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:27:40.710743  533196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-526659/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 00:27:40.744322  533196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-526659/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:27:40.770269  533196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-526659/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:27:40.795634  533196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0722 00:27:40.818926  533196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:27:40.842636  533196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:27:40.866479  533196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 00:27:40.889805  533196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-526659/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:27:40.913572  533196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:27:40.932012  533196 ssh_runner.go:195] Run: openssl version
	I0722 00:27:40.937804  533196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:27:40.947545  533196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:27:40.950885  533196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:27:40.950957  533196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:27:40.958046  533196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:27:40.967250  533196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:27:40.970460  533196 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 00:27:40.970519  533196 kubeadm.go:392] StartCluster: {Name:addons-783853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-783853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:27:40.970605  533196 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:27:40.970662  533196 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:27:41.011156  533196 cri.go:89] found id: ""
	I0722 00:27:41.011232  533196 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:27:41.020304  533196 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:27:41.029320  533196 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0722 00:27:41.029385  533196 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:27:41.041300  533196 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:27:41.041371  533196 kubeadm.go:157] found existing configuration files:
	
	I0722 00:27:41.041458  533196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:27:41.050831  533196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:27:41.050945  533196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:27:41.059287  533196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:27:41.068191  533196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:27:41.068287  533196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:27:41.077204  533196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:27:41.086360  533196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:27:41.086457  533196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:27:41.095146  533196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:27:41.104046  533196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:27:41.104131  533196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:27:41.112598  533196 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0722 00:27:41.198517  533196 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1064-aws\n", err: exit status 1
	I0722 00:27:41.270716  533196 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:27:56.830932  533196 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 00:27:56.830992  533196 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:27:56.831083  533196 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0722 00:27:56.831160  533196 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1064-aws
	I0722 00:27:56.831200  533196 kubeadm.go:310] OS: Linux
	I0722 00:27:56.831251  533196 kubeadm.go:310] CGROUPS_CPU: enabled
	I0722 00:27:56.831309  533196 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0722 00:27:56.831355  533196 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0722 00:27:56.831402  533196 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0722 00:27:56.831449  533196 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0722 00:27:56.831496  533196 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0722 00:27:56.831540  533196 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0722 00:27:56.831587  533196 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0722 00:27:56.831633  533196 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0722 00:27:56.831703  533196 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:27:56.831796  533196 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:27:56.831887  533196 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 00:27:56.831950  533196 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:27:56.833999  533196 out.go:204]   - Generating certificates and keys ...
	I0722 00:27:56.834094  533196 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:27:56.834163  533196 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:27:56.834231  533196 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0722 00:27:56.834290  533196 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0722 00:27:56.834353  533196 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0722 00:27:56.834405  533196 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0722 00:27:56.834460  533196 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0722 00:27:56.834576  533196 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-783853 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0722 00:27:56.834632  533196 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0722 00:27:56.834754  533196 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-783853 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0722 00:27:56.834822  533196 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0722 00:27:56.834887  533196 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0722 00:27:56.834934  533196 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0722 00:27:56.834991  533196 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:27:56.835044  533196 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:27:56.835103  533196 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 00:27:56.835160  533196 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:27:56.835225  533196 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:27:56.835286  533196 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:27:56.835369  533196 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:27:56.835441  533196 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:27:56.837105  533196 out.go:204]   - Booting up control plane ...
	I0722 00:27:56.837238  533196 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:27:56.837369  533196 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:27:56.837454  533196 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:27:56.837573  533196 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:27:56.837664  533196 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:27:56.837709  533196 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:27:56.837842  533196 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 00:27:56.837915  533196 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 00:27:56.837976  533196 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501628622s
	I0722 00:27:56.838052  533196 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 00:27:56.838113  533196 kubeadm.go:310] [api-check] The API server is healthy after 6.001737803s
	I0722 00:27:56.838217  533196 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 00:27:56.838339  533196 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 00:27:56.838399  533196 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 00:27:56.838575  533196 kubeadm.go:310] [mark-control-plane] Marking the node addons-783853 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 00:27:56.838632  533196 kubeadm.go:310] [bootstrap-token] Using token: e7b4i5.vbym5s3kk6kc87y8
	I0722 00:27:56.840381  533196 out.go:204]   - Configuring RBAC rules ...
	I0722 00:27:56.840493  533196 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 00:27:56.840582  533196 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 00:27:56.840722  533196 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 00:27:56.840901  533196 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 00:27:56.841025  533196 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 00:27:56.841113  533196 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 00:27:56.841228  533196 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 00:27:56.841279  533196 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 00:27:56.841332  533196 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 00:27:56.841340  533196 kubeadm.go:310] 
	I0722 00:27:56.841398  533196 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 00:27:56.841406  533196 kubeadm.go:310] 
	I0722 00:27:56.841480  533196 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 00:27:56.841489  533196 kubeadm.go:310] 
	I0722 00:27:56.841513  533196 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 00:27:56.841573  533196 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 00:27:56.841627  533196 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 00:27:56.841631  533196 kubeadm.go:310] 
	I0722 00:27:56.841683  533196 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 00:27:56.841690  533196 kubeadm.go:310] 
	I0722 00:27:56.841736  533196 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 00:27:56.841743  533196 kubeadm.go:310] 
	I0722 00:27:56.841794  533196 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 00:27:56.841868  533196 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 00:27:56.841936  533196 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 00:27:56.841943  533196 kubeadm.go:310] 
	I0722 00:27:56.842030  533196 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 00:27:56.842108  533196 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 00:27:56.842115  533196 kubeadm.go:310] 
	I0722 00:27:56.842196  533196 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token e7b4i5.vbym5s3kk6kc87y8 \
	I0722 00:27:56.842299  533196 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7164c6734272d868157842346e8690c5e25f90de83e5fe6d168aaf43b24e1417 \
	I0722 00:27:56.842322  533196 kubeadm.go:310] 	--control-plane 
	I0722 00:27:56.842336  533196 kubeadm.go:310] 
	I0722 00:27:56.842417  533196 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 00:27:56.842424  533196 kubeadm.go:310] 
	I0722 00:27:56.842503  533196 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token e7b4i5.vbym5s3kk6kc87y8 \
	I0722 00:27:56.842618  533196 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7164c6734272d868157842346e8690c5e25f90de83e5fe6d168aaf43b24e1417 
	I0722 00:27:56.842630  533196 cni.go:84] Creating CNI manager for ""
	I0722 00:27:56.842638  533196 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0722 00:27:56.844482  533196 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0722 00:27:56.846195  533196 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0722 00:27:56.850824  533196 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0722 00:27:56.850845  533196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0722 00:27:56.870191  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0722 00:27:57.177431  533196 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:27:57.177570  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:27:57.177654  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-783853 minikube.k8s.io/updated_at=2024_07_22T00_27_57_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=addons-783853 minikube.k8s.io/primary=true
	I0722 00:27:57.327652  533196 ops.go:34] apiserver oom_adj: -16
	I0722 00:27:57.327770  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:27:57.828433  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:27:58.328562  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:27:58.827890  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:27:59.328555  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:27:59.827926  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:00.328588  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:00.828109  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:01.328865  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:01.828851  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:02.328684  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:02.827921  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:03.327955  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:03.828201  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:04.328865  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:04.828616  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:05.328859  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:05.828403  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:06.328500  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:06.827924  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:07.327974  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:07.827958  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:08.328403  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:08.827874  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:09.328377  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:09.828355  533196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:28:09.942815  533196 kubeadm.go:1113] duration metric: took 12.765292953s to wait for elevateKubeSystemPrivileges
	I0722 00:28:09.942849  533196 kubeadm.go:394] duration metric: took 28.972333809s to StartCluster
	I0722 00:28:09.942867  533196 settings.go:142] acquiring lock: {Name:mk10d2325078b8f55c71d679c871958034fe6b22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:28:09.943529  533196 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-526659/kubeconfig
	I0722 00:28:09.943920  533196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-526659/kubeconfig: {Name:mk85dda85ca5bc25fe23397cf817bcf2d3bbdbc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:28:09.944578  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0722 00:28:09.944601  533196 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:28:09.944893  533196 config.go:182] Loaded profile config "addons-783853": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:28:09.945005  533196 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0722 00:28:09.945088  533196 addons.go:69] Setting yakd=true in profile "addons-783853"
	I0722 00:28:09.945109  533196 addons.go:234] Setting addon yakd=true in "addons-783853"
	I0722 00:28:09.945133  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:09.945619  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.946123  533196 addons.go:69] Setting cloud-spanner=true in profile "addons-783853"
	I0722 00:28:09.946134  533196 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-783853"
	I0722 00:28:09.946152  533196 addons.go:234] Setting addon cloud-spanner=true in "addons-783853"
	I0722 00:28:09.946178  533196 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-783853"
	I0722 00:28:09.946184  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:09.946200  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:09.946573  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.946626  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.946127  533196 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-783853"
	I0722 00:28:09.948993  533196 addons.go:69] Setting default-storageclass=true in profile "addons-783853"
	I0722 00:28:09.949039  533196 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-783853"
	I0722 00:28:09.949151  533196 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-783853"
	I0722 00:28:09.949256  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:09.949321  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.956856  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.949327  533196 addons.go:69] Setting gcp-auth=true in profile "addons-783853"
	I0722 00:28:09.957272  533196 mustload.go:65] Loading cluster: addons-783853
	I0722 00:28:09.957493  533196 config.go:182] Loaded profile config "addons-783853": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:28:09.957789  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.949336  533196 addons.go:69] Setting ingress=true in profile "addons-783853"
	I0722 00:28:09.968689  533196 addons.go:234] Setting addon ingress=true in "addons-783853"
	I0722 00:28:09.968801  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:09.969309  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.949341  533196 addons.go:69] Setting ingress-dns=true in profile "addons-783853"
	I0722 00:28:09.977363  533196 addons.go:234] Setting addon ingress-dns=true in "addons-783853"
	I0722 00:28:09.977450  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:09.978395  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.949345  533196 addons.go:69] Setting inspektor-gadget=true in profile "addons-783853"
	I0722 00:28:09.987800  533196 addons.go:234] Setting addon inspektor-gadget=true in "addons-783853"
	I0722 00:28:09.987865  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:09.992650  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.949348  533196 addons.go:69] Setting metrics-server=true in profile "addons-783853"
	I0722 00:28:10.008028  533196 addons.go:234] Setting addon metrics-server=true in "addons-783853"
	I0722 00:28:10.008104  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:10.008642  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.949355  533196 out.go:177] * Verifying Kubernetes components...
	I0722 00:28:09.949374  533196 addons.go:69] Setting registry=true in profile "addons-783853"
	I0722 00:28:10.047210  533196 addons.go:234] Setting addon registry=true in "addons-783853"
	I0722 00:28:10.047288  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:10.047814  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:10.066054  533196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:28:09.949388  533196 addons.go:69] Setting storage-provisioner=true in profile "addons-783853"
	I0722 00:28:10.072533  533196 addons.go:234] Setting addon storage-provisioner=true in "addons-783853"
	I0722 00:28:10.072603  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:10.080892  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.949396  533196 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-783853"
	I0722 00:28:10.091613  533196 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-783853"
	I0722 00:28:10.091959  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.949402  533196 addons.go:69] Setting volcano=true in profile "addons-783853"
	I0722 00:28:10.102616  533196 addons.go:234] Setting addon volcano=true in "addons-783853"
	I0722 00:28:10.102663  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:10.103375  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:09.949408  533196 addons.go:69] Setting volumesnapshots=true in profile "addons-783853"
	I0722 00:28:10.121151  533196 addons.go:234] Setting addon volumesnapshots=true in "addons-783853"
	I0722 00:28:10.121194  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:10.139330  533196 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0722 00:28:10.145101  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:10.149961  533196 addons.go:234] Setting addon default-storageclass=true in "addons-783853"
	I0722 00:28:10.154249  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:10.154796  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:10.167489  533196 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0722 00:28:10.167673  533196 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0722 00:28:10.172812  533196 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0722 00:28:10.174362  533196 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0722 00:28:10.174382  533196 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0722 00:28:10.175070  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:10.185772  533196 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0722 00:28:10.185851  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0722 00:28:10.185946  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:10.201468  533196 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0722 00:28:10.204533  533196 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0722 00:28:10.206749  533196 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0722 00:28:10.206772  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0722 00:28:10.206838  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:10.213513  533196 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0722 00:28:10.219988  533196 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0722 00:28:10.221579  533196 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0722 00:28:10.221602  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0722 00:28:10.221672  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:10.225040  533196 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0722 00:28:10.226740  533196 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:28:10.226762  533196 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:28:10.226834  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:10.240019  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:10.270874  533196 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0722 00:28:10.272620  533196 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0722 00:28:10.274555  533196 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0722 00:28:10.276123  533196 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0722 00:28:10.279058  533196 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0722 00:28:10.282491  533196 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0722 00:28:10.283355  533196 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-783853"
	I0722 00:28:10.283395  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:10.283786  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:10.288894  533196 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0722 00:28:10.292651  533196 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0722 00:28:10.294230  533196 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0722 00:28:10.294269  533196 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0722 00:28:10.294356  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:10.300930  533196 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0722 00:28:10.300955  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0722 00:28:10.301020  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:10.317976  533196 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0722 00:28:10.318017  533196 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0722 00:28:10.318086  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:10.328471  533196 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:28:10.331177  533196 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:28:10.331201  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:28:10.331279  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:10.363305  533196 out.go:177]   - Using image docker.io/registry:2.8.3
	I0722 00:28:10.363754  533196 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:28:10.363771  533196 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:28:10.363831  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:10.367102  533196 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0722 00:28:10.368842  533196 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0722 00:28:10.368864  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0722 00:28:10.368947  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	W0722 00:28:10.384021  533196 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0722 00:28:10.412856  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.423909  533196 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0722 00:28:10.428924  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.429299  533196 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0722 00:28:10.429420  533196 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0722 00:28:10.432808  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:10.443949  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.474286  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.520621  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0722 00:28:10.532059  533196 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0722 00:28:10.532194  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.536614  533196 out.go:177]   - Using image docker.io/busybox:stable
	I0722 00:28:10.540353  533196 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0722 00:28:10.540375  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0722 00:28:10.540444  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:10.560857  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.569014  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.571150  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.595469  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.601293  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.603702  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.605458  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.633275  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:10.762580  533196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:28:10.834112  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0722 00:28:10.921668  533196 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0722 00:28:10.921694  533196 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0722 00:28:11.015968  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0722 00:28:11.038003  533196 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0722 00:28:11.038031  533196 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0722 00:28:11.078449  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0722 00:28:11.088188  533196 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0722 00:28:11.088212  533196 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0722 00:28:11.117628  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0722 00:28:11.135668  533196 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0722 00:28:11.135695  533196 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0722 00:28:11.151046  533196 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:28:11.151074  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0722 00:28:11.153753  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:28:11.161879  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:28:11.171077  533196 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0722 00:28:11.171149  533196 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0722 00:28:11.173327  533196 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0722 00:28:11.173395  533196 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0722 00:28:11.232562  533196 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0722 00:28:11.232633  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0722 00:28:11.249787  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0722 00:28:11.288114  533196 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0722 00:28:11.288192  533196 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0722 00:28:11.330466  533196 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0722 00:28:11.330545  533196 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0722 00:28:11.355532  533196 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:28:11.355618  533196 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:28:11.355698  533196 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0722 00:28:11.355737  533196 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0722 00:28:11.379823  533196 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0722 00:28:11.379896  533196 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0722 00:28:11.415226  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0722 00:28:11.492295  533196 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0722 00:28:11.492365  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0722 00:28:11.522833  533196 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0722 00:28:11.522910  533196 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0722 00:28:11.527923  533196 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0722 00:28:11.528000  533196 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0722 00:28:11.530553  533196 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0722 00:28:11.530637  533196 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0722 00:28:11.569355  533196 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:28:11.569431  533196 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:28:11.659694  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0722 00:28:11.692467  533196 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0722 00:28:11.692547  533196 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0722 00:28:11.696861  533196 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0722 00:28:11.696932  533196 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0722 00:28:11.721512  533196 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0722 00:28:11.721593  533196 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0722 00:28:11.735973  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:28:11.828634  533196 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0722 00:28:11.828711  533196 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0722 00:28:11.843231  533196 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0722 00:28:11.843301  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0722 00:28:11.867473  533196 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0722 00:28:11.867547  533196 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0722 00:28:11.943673  533196 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0722 00:28:11.943746  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0722 00:28:11.947920  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0722 00:28:11.991152  533196 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0722 00:28:11.991226  533196 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0722 00:28:12.095027  533196 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0722 00:28:12.095103  533196 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0722 00:28:12.127275  533196 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0722 00:28:12.127346  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0722 00:28:12.196257  533196 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0722 00:28:12.196329  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0722 00:28:12.213698  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0722 00:28:12.247278  533196 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0722 00:28:12.247348  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0722 00:28:12.321434  533196 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0722 00:28:12.321516  533196 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0722 00:28:12.482819  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0722 00:28:12.859679  533196 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.339020713s)
	I0722 00:28:12.859755  533196 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0722 00:28:12.860958  533196 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.098310995s)
	I0722 00:28:12.862017  533196 node_ready.go:35] waiting up to 6m0s for node "addons-783853" to be "Ready" ...
	I0722 00:28:13.926565  533196 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-783853" context rescaled to 1 replicas
	I0722 00:28:15.064945  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.23075509s)
	I0722 00:28:15.198441  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:17.147061  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.131053227s)
	I0722 00:28:17.147093  533196 addons.go:475] Verifying addon ingress=true in "addons-783853"
	I0722 00:28:17.147282  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.068805958s)
	I0722 00:28:17.147328  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.029676603s)
	I0722 00:28:17.147352  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.993578731s)
	I0722 00:28:17.147535  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.985634195s)
	I0722 00:28:17.147579  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.897774226s)
	I0722 00:28:17.147662  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.732368904s)
	I0722 00:28:17.147672  533196 addons.go:475] Verifying addon registry=true in "addons-783853"
	I0722 00:28:17.148036  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.488265501s)
	I0722 00:28:17.148201  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.412146777s)
	I0722 00:28:17.148218  533196 addons.go:475] Verifying addon metrics-server=true in "addons-783853"
	I0722 00:28:17.148326  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.200334299s)
	W0722 00:28:17.148345  533196 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0722 00:28:17.148362  533196 retry.go:31] will retry after 303.438114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0722 00:28:17.148517  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.934737239s)
	I0722 00:28:17.149784  533196 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-783853 service yakd-dashboard -n yakd-dashboard
	
	I0722 00:28:17.149892  533196 out.go:177] * Verifying registry addon...
	I0722 00:28:17.149914  533196 out.go:177] * Verifying ingress addon...
	I0722 00:28:17.153061  533196 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0722 00:28:17.153965  533196 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0722 00:28:17.192619  533196 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0722 00:28:17.192710  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:17.199033  533196 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0722 00:28:17.199103  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0722 00:28:17.207025  533196 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0722 00:28:17.378476  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:17.451966  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0722 00:28:17.687896  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:17.707786  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:17.816903  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.333986263s)
	I0722 00:28:17.816943  533196 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-783853"
	I0722 00:28:17.818983  533196 out.go:177] * Verifying csi-hostpath-driver addon...
	I0722 00:28:17.821583  533196 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0722 00:28:17.842303  533196 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0722 00:28:17.842330  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:18.173682  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:18.174901  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:18.354261  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:18.659281  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:18.659912  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:18.743315  533196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.291302097s)
	I0722 00:28:18.751449  533196 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0722 00:28:18.751537  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:18.775136  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:18.825974  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:18.899273  533196 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0722 00:28:18.934227  533196 addons.go:234] Setting addon gcp-auth=true in "addons-783853"
	I0722 00:28:18.934334  533196 host.go:66] Checking if "addons-783853" exists ...
	I0722 00:28:18.934855  533196 cli_runner.go:164] Run: docker container inspect addons-783853 --format={{.State.Status}}
	I0722 00:28:18.957613  533196 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0722 00:28:18.957665  533196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-783853
	I0722 00:28:18.980675  533196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38981 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/addons-783853/id_rsa Username:docker}
	I0722 00:28:19.086859  533196 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0722 00:28:19.088833  533196 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0722 00:28:19.090589  533196 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0722 00:28:19.090611  533196 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0722 00:28:19.125707  533196 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0722 00:28:19.125729  533196 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0722 00:28:19.145688  533196 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0722 00:28:19.145763  533196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0722 00:28:19.160216  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:19.161295  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:19.173883  533196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0722 00:28:19.327341  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:19.659090  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:19.680058  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:19.853822  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:19.881026  533196 addons.go:475] Verifying addon gcp-auth=true in "addons-783853"
	I0722 00:28:19.883157  533196 out.go:177] * Verifying gcp-auth addon...
	I0722 00:28:19.885759  533196 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0722 00:28:19.893260  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:19.903588  533196 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0722 00:28:19.903661  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:20.167384  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:20.168881  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:20.326471  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:20.389797  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:20.658233  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:20.659180  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:20.826170  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:20.889712  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:21.158773  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:21.158876  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:21.326308  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:21.388824  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:21.659215  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:21.660448  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:21.825641  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:21.889312  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:22.157359  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:22.158410  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:22.326534  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:22.365686  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:22.390200  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:22.663252  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:22.664490  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:22.826774  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:22.890749  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:23.157896  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:23.158749  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:23.325895  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:23.390144  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:23.658045  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:23.660190  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:23.826380  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:23.889643  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:24.157503  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:24.159236  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:24.326205  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:24.389290  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:24.657710  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:24.658385  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:24.826608  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:24.865828  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:24.889511  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:25.158505  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:25.159155  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:25.326048  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:25.390585  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:25.657380  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:25.658164  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:25.826417  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:25.889743  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:26.158505  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:26.158541  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:26.325582  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:26.389824  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:26.657679  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:26.658806  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:26.826073  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:26.889399  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:27.158925  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:27.159148  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:27.326116  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:27.365407  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:27.389318  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:27.657839  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:27.658432  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:27.826796  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:27.888974  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:28.158250  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:28.159087  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:28.326008  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:28.389246  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:28.657731  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:28.660377  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:28.825453  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:28.889819  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:29.158290  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:29.158601  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:29.326145  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:29.390039  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:29.658186  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:29.659100  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:29.826136  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:29.865603  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:29.889714  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:30.157960  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:30.159213  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:30.325974  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:30.389310  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:30.657030  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:30.658840  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:30.826125  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:30.889086  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:31.158056  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:31.158589  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:31.326421  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:31.389108  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:31.658291  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:31.659164  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:31.825615  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:31.889792  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:32.157960  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:32.158566  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:32.334360  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:32.365778  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:32.389722  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:32.658119  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:32.658870  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:32.825788  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:32.889572  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:33.157115  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:33.158437  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:33.325842  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:33.389632  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:33.658406  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:33.659064  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:33.826108  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:33.889058  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:34.158243  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:34.158622  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:34.326077  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:34.366821  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:34.389279  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:34.658319  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:34.658686  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:34.826884  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:34.889383  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:35.157005  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:35.158481  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:35.326212  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:35.389106  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:35.661267  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:35.662734  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:35.826079  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:35.889207  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:36.158605  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:36.159095  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:36.326410  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:36.389659  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:36.657240  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:36.658951  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:36.826250  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:36.865609  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:36.890251  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:37.157253  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:37.158445  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:37.325955  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:37.389864  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:37.656787  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:37.658501  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:37.825881  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:37.889595  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:38.157852  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:38.158360  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:38.325485  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:38.390405  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:38.659030  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:38.659234  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:38.826108  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:38.889164  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:39.159900  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:39.160096  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:39.326308  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:39.365831  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:39.389632  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:39.657714  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:39.658532  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:39.826412  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:39.889230  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:40.157368  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:40.159435  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:40.326032  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:40.390986  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:40.658118  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:40.658817  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:40.825756  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:40.889648  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:41.157103  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:41.158506  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:41.325595  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:41.397164  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:41.658563  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:41.660803  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:41.826471  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:41.867137  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:41.889331  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:42.158593  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:42.159486  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:42.326450  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:42.389857  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:42.657996  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:42.658276  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:42.826724  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:42.889416  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:43.159250  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:43.159542  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:43.326368  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:43.389516  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:43.658652  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:43.660157  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:43.826044  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:43.889333  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:44.156782  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:44.158681  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:44.325882  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:44.365809  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:44.389137  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:44.658168  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:44.659024  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:44.826296  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:44.889266  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:45.157854  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:45.159159  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:45.327494  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:45.389571  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:45.657298  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:45.659062  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:45.825381  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:45.889857  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:46.158381  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:46.159065  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:46.326317  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:46.389556  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:46.657160  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:46.658697  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:46.827028  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:46.865472  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:46.889955  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:47.157309  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:47.158984  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:47.326431  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:47.389061  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:47.657899  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:47.658809  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:47.825555  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:47.889599  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:48.157321  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:48.159334  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:48.325764  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:48.389552  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:48.657824  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:48.660644  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:48.825761  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:48.889833  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:49.158422  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:49.158957  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:49.326524  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:49.365778  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:49.388999  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:49.658048  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:49.658795  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:49.826130  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:49.889643  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:50.158377  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:50.158766  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:50.325608  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:50.389252  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:50.658580  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:50.659114  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:50.826568  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:50.889105  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:51.158432  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:51.159061  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:51.325709  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:51.365958  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:51.389811  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:51.658990  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:51.659468  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:51.826195  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:51.890040  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:52.157706  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:52.158234  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:52.326760  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:52.390172  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:52.658729  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:52.659202  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:52.826622  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:52.890107  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:53.157821  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:53.159470  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:53.325480  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:53.389514  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:53.658885  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:53.660235  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:53.826018  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:53.865637  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:53.889561  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:54.158736  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:54.159217  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:54.326121  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:54.409929  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:54.657997  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:54.659619  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:54.825997  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:54.889828  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:55.157584  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:55.158116  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:55.326381  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:55.391467  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:55.658294  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:55.658880  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:55.825545  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:55.865730  533196 node_ready.go:53] node "addons-783853" has status "Ready":"False"
	I0722 00:28:55.888895  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:56.194025  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:56.198349  533196 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0722 00:28:56.198422  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:56.329745  533196 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0722 00:28:56.329823  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:56.374938  533196 node_ready.go:49] node "addons-783853" has status "Ready":"True"
	I0722 00:28:56.375004  533196 node_ready.go:38] duration metric: took 43.512936622s for node "addons-783853" to be "Ready" ...
	I0722 00:28:56.375028  533196 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:28:56.391416  533196 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7mkbx" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:56.407114  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:56.668418  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:56.670416  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:56.831514  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:56.899665  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:57.159334  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:57.160245  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:57.327071  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:57.390186  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:57.657415  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:57.659445  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:57.827069  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:57.889063  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:58.159109  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:58.159974  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:58.327789  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:58.390088  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:58.397262  533196 pod_ready.go:102] pod "coredns-7db6d8ff4d-7mkbx" in "kube-system" namespace has status "Ready":"False"
	I0722 00:28:58.690246  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:58.718400  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:58.845163  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:58.916618  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:58.923204  533196 pod_ready.go:92] pod "coredns-7db6d8ff4d-7mkbx" in "kube-system" namespace has status "Ready":"True"
	I0722 00:28:58.923229  533196 pod_ready.go:81] duration metric: took 2.53173046s for pod "coredns-7db6d8ff4d-7mkbx" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:58.923255  533196 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-783853" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:58.938134  533196 pod_ready.go:92] pod "etcd-addons-783853" in "kube-system" namespace has status "Ready":"True"
	I0722 00:28:58.938162  533196 pod_ready.go:81] duration metric: took 14.898923ms for pod "etcd-addons-783853" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:58.938177  533196 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-783853" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:58.957006  533196 pod_ready.go:92] pod "kube-apiserver-addons-783853" in "kube-system" namespace has status "Ready":"True"
	I0722 00:28:58.957031  533196 pod_ready.go:81] duration metric: took 18.846838ms for pod "kube-apiserver-addons-783853" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:58.957043  533196 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-783853" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:58.973957  533196 pod_ready.go:92] pod "kube-controller-manager-addons-783853" in "kube-system" namespace has status "Ready":"True"
	I0722 00:28:58.973986  533196 pod_ready.go:81] duration metric: took 16.933615ms for pod "kube-controller-manager-addons-783853" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:58.974000  533196 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v7srs" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:58.990127  533196 pod_ready.go:92] pod "kube-proxy-v7srs" in "kube-system" namespace has status "Ready":"True"
	I0722 00:28:58.990152  533196 pod_ready.go:81] duration metric: took 16.135895ms for pod "kube-proxy-v7srs" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:58.990164  533196 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-783853" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:59.158130  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:59.166060  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:59.295137  533196 pod_ready.go:92] pod "kube-scheduler-addons-783853" in "kube-system" namespace has status "Ready":"True"
	I0722 00:28:59.295162  533196 pod_ready.go:81] duration metric: took 304.989623ms for pod "kube-scheduler-addons-783853" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:59.295174  533196 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace to be "Ready" ...
	I0722 00:28:59.327714  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:59.391399  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:28:59.660654  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:28:59.661993  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:28:59.830057  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:28:59.890247  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:00.165211  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:00.172807  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:00.330067  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:00.391089  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:00.661848  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:00.663586  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:00.829393  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:00.890611  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:01.158981  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:01.159513  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:01.302485  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:01.327001  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:01.390214  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:01.658480  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:01.659841  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:01.827042  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:01.889454  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:02.159103  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:02.159512  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:02.329427  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:02.391313  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:02.661216  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:02.664148  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:02.828601  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:02.889987  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:03.163054  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:03.164546  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:03.303686  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:03.327944  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:03.390440  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:03.659636  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:03.660554  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:03.827574  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:03.889698  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:04.158687  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:04.159536  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:04.327288  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:04.389480  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:04.660906  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:04.662102  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:04.827314  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:04.889333  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:05.159770  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:05.180905  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:05.327118  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:05.390608  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:05.660303  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:05.660939  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:05.801702  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:05.827210  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:05.893977  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:06.174501  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:06.178812  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:06.338461  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:06.389967  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:06.660449  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:06.661168  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:06.827706  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:06.889111  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:07.158784  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:07.159352  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:07.327609  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:07.389775  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:07.659474  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:07.660487  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:07.803119  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:07.833814  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:07.889470  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:08.163631  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:08.173205  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:08.330739  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:08.390567  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:08.659521  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:08.660282  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:08.827258  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:08.889331  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:09.164623  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:09.165526  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:09.327283  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:09.394906  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:09.658113  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:09.659780  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:09.828885  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:09.889507  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:10.158273  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:10.160286  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:10.301547  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:10.326830  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:10.389263  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:10.662288  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:10.665009  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:10.829296  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:10.890208  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:11.159462  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:11.162164  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:11.328321  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:11.391691  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:11.659850  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:11.660763  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:11.827240  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:11.889531  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:12.158672  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:12.159900  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:12.301998  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:12.326961  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:12.389399  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:12.660509  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:12.661416  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:12.827205  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:12.889924  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:13.173155  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:13.193335  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:13.327794  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:13.390529  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:13.660922  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:13.663973  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:13.828973  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:13.890772  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:14.162545  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:14.164215  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:14.302502  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:14.328440  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:14.390939  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:14.667483  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:14.669071  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:14.833143  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:14.890164  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:15.161909  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:15.163440  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:15.329420  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:15.391547  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:15.660143  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:15.661685  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:15.838004  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:15.891445  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:16.159529  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:16.160957  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:16.305437  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:16.331867  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:16.389750  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:16.660365  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:16.661325  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:16.827486  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:16.889839  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:17.157944  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:17.160742  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:17.327144  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:17.389636  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:17.661365  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:17.669896  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:17.829141  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:17.890481  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:18.163111  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:18.164786  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:18.330162  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:18.392219  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:18.658420  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:18.660910  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:18.807461  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:18.833920  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:18.891244  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:19.163042  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:19.164030  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:19.329604  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:19.390097  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:19.658162  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:19.661276  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:19.829139  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:19.891756  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:20.178377  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:20.180415  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:20.329673  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:20.391270  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:20.668918  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:20.678391  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:20.828151  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:20.889929  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:21.164633  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:21.166050  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:21.302300  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:21.341311  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:21.389871  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:21.660591  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:21.662128  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:21.831065  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:21.893095  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:22.164298  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:22.165726  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:22.327068  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:22.389237  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:22.657466  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:22.658104  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:22.828944  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:22.889277  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:23.157438  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:23.157693  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:23.303518  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:23.327236  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:23.389480  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:23.658790  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:23.659411  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:23.827652  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:23.889955  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:24.159284  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:24.160378  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:24.347033  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:24.394010  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:24.667404  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:24.670265  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:24.827918  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:24.892717  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:25.166505  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:25.168906  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:25.339167  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:25.389867  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:25.674039  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:25.675858  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:25.802577  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:25.834281  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:25.890268  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:26.175438  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:26.177119  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:26.327445  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:26.396286  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:26.660805  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:26.664302  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:26.833000  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:26.891342  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:27.159275  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:27.161068  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:27.330738  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:27.394360  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:27.658743  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:27.659827  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:27.811499  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:27.829979  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:27.890592  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:28.166111  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:28.167462  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:28.329829  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:28.389859  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:28.669432  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:28.669871  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:28.826763  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:28.889762  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:29.158599  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:29.159761  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:29.327385  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:29.392550  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:29.661779  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:29.664406  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:29.830896  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:29.889561  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:30.159065  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:30.159678  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:30.301221  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:30.326953  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:30.389466  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:30.658761  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:30.659553  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:30.827228  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:30.889554  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:31.158819  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:31.158945  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:31.327879  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:31.389527  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:31.658358  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:31.659850  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:31.828073  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:31.889501  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:32.158008  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:32.159000  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:32.302219  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:32.329050  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:32.390041  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:32.683421  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:32.692725  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:32.830390  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:32.891340  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:33.165193  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:33.167126  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:33.328104  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:33.397302  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:33.662154  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:33.665068  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:33.828203  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:33.890349  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:34.160003  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:34.168209  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:34.304158  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:34.330055  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:34.389903  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:34.660072  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:34.662082  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:34.829685  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:34.892148  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:35.161886  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:35.163033  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:35.328828  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:35.390255  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:35.660141  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:35.664747  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:35.828326  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:35.889628  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:36.159578  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:36.162285  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:36.334489  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:36.390490  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:36.659837  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:36.661594  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:36.803183  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:36.828797  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:36.897896  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:37.158014  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:37.162784  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:37.339416  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:37.392139  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:37.658373  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:37.659416  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:37.829633  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:37.890055  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:38.158951  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:38.161248  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:38.327514  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:38.394894  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:38.659340  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:38.660579  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:38.827093  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:38.889133  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:39.169218  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:39.182700  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:39.302316  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:39.328189  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:39.389659  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:39.668536  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:39.670955  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:39.831502  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:39.892571  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:40.158109  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:40.159345  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:40.327194  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:40.389357  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:40.658650  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:40.659449  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:40.826904  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:40.889503  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:41.159218  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:41.160408  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:41.328140  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:41.389738  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:41.660594  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:41.661904  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:41.803028  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:41.827831  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:41.890940  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:42.161338  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:42.165688  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:42.328248  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:42.394345  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:42.664576  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:42.665998  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:42.832369  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:42.892492  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:43.159415  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:43.160752  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:43.328296  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:43.389978  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:43.661569  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:43.664065  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:43.833982  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:43.889492  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:44.161021  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:44.162279  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:44.302969  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:44.327934  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:44.390329  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:44.657778  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:44.660791  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:44.828114  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:44.890347  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:45.163851  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:45.166242  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:45.327810  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:45.390407  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:45.660965  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:45.661762  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:45.827818  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:45.889272  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:46.161543  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 00:29:46.161810  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:46.327633  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:46.390066  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:46.659087  533196 kapi.go:107] duration metric: took 1m29.506025896s to wait for kubernetes.io/minikube-addons=registry ...
	I0722 00:29:46.662320  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:46.801937  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:46.843945  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:46.889538  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:47.159663  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:47.327679  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:47.390236  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:47.659249  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:47.827970  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:47.889430  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:48.159168  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:48.327903  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:48.389603  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:48.659714  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:48.802836  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:48.828344  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:48.891214  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:49.159747  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:49.327859  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:49.389074  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:49.658956  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:49.830577  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:49.889810  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:50.159151  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:50.327138  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:50.389805  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:50.659045  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:50.806814  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:50.827683  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:50.890076  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:51.159039  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:51.330167  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:51.389985  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:51.659310  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:51.827164  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:51.889202  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:52.160051  533196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 00:29:52.329042  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:52.389411  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:52.658425  533196 kapi.go:107] duration metric: took 1m35.504456713s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0722 00:29:52.827458  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:52.889761  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:53.310911  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:53.327670  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:53.390234  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:53.829447  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:53.889922  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:54.326797  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:54.389060  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:54.828538  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:54.889457  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:55.327478  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:55.400217  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:55.802075  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:55.829543  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:55.891882  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:56.328710  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:56.389050  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:56.848067  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:56.890427  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 00:29:57.371834  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:57.391055  533196 kapi.go:107] duration metric: took 1m37.505291183s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0722 00:29:57.393062  533196 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-783853 cluster.
	I0722 00:29:57.394680  533196 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0722 00:29:57.396813  533196 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0722 00:29:57.827915  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:58.303991  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:29:58.329070  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:58.834746  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:59.327515  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:29:59.826883  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:00.307740  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:00.333622  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:00.831931  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:01.327523  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:01.829328  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:02.327979  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:02.802669  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:02.828783  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:03.328033  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:03.827261  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:04.327127  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:04.827408  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:05.301405  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:05.328051  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:05.828483  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:06.327479  533196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 00:30:06.827579  533196 kapi.go:107] duration metric: took 1m49.005991601s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0722 00:30:06.829636  533196 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0722 00:30:06.831151  533196 addons.go:510] duration metric: took 1m56.886153118s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0722 00:30:07.302665  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:09.802165  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:11.802403  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:14.302821  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:16.803394  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:19.301701  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:21.302056  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:23.302436  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:25.801224  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:28.301001  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:30.301543  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:32.301857  533196 pod_ready.go:102] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"False"
	I0722 00:30:34.300942  533196 pod_ready.go:92] pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace has status "Ready":"True"
	I0722 00:30:34.300967  533196 pod_ready.go:81] duration metric: took 1m35.005785877s for pod "metrics-server-c59844bb4-znqdq" in "kube-system" namespace to be "Ready" ...
	I0722 00:30:34.300979  533196 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-jwvh7" in "kube-system" namespace to be "Ready" ...
	I0722 00:30:34.306345  533196 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-jwvh7" in "kube-system" namespace has status "Ready":"True"
	I0722 00:30:34.306371  533196 pod_ready.go:81] duration metric: took 5.38443ms for pod "nvidia-device-plugin-daemonset-jwvh7" in "kube-system" namespace to be "Ready" ...
	I0722 00:30:34.306392  533196 pod_ready.go:38] duration metric: took 1m37.931338924s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:30:34.306410  533196 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:30:34.306454  533196 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:30:34.306521  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:30:34.358573  533196 cri.go:89] found id: "f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c"
	I0722 00:30:34.358595  533196 cri.go:89] found id: ""
	I0722 00:30:34.358607  533196 logs.go:276] 1 containers: [f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c]
	I0722 00:30:34.358663  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:34.363032  533196 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:30:34.363106  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:30:34.404043  533196 cri.go:89] found id: "c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe"
	I0722 00:30:34.404064  533196 cri.go:89] found id: ""
	I0722 00:30:34.404072  533196 logs.go:276] 1 containers: [c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe]
	I0722 00:30:34.404144  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:34.407487  533196 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:30:34.407587  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:30:34.451962  533196 cri.go:89] found id: "c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3"
	I0722 00:30:34.452035  533196 cri.go:89] found id: ""
	I0722 00:30:34.452058  533196 logs.go:276] 1 containers: [c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3]
	I0722 00:30:34.452146  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:34.455497  533196 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:30:34.455578  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:30:34.502021  533196 cri.go:89] found id: "c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28"
	I0722 00:30:34.502042  533196 cri.go:89] found id: ""
	I0722 00:30:34.502050  533196 logs.go:276] 1 containers: [c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28]
	I0722 00:30:34.502112  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:34.505434  533196 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:30:34.505506  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:30:34.545862  533196 cri.go:89] found id: "7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d"
	I0722 00:30:34.545886  533196 cri.go:89] found id: ""
	I0722 00:30:34.545894  533196 logs.go:276] 1 containers: [7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d]
	I0722 00:30:34.545966  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:34.549469  533196 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:30:34.549552  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:30:34.597537  533196 cri.go:89] found id: "a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab"
	I0722 00:30:34.597568  533196 cri.go:89] found id: ""
	I0722 00:30:34.597577  533196 logs.go:276] 1 containers: [a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab]
	I0722 00:30:34.597636  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:34.602154  533196 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:30:34.602225  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:30:34.642444  533196 cri.go:89] found id: "f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b"
	I0722 00:30:34.642467  533196 cri.go:89] found id: ""
	I0722 00:30:34.642480  533196 logs.go:276] 1 containers: [f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b]
	I0722 00:30:34.642555  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:34.646188  533196 logs.go:123] Gathering logs for kube-apiserver [f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c] ...
	I0722 00:30:34.646211  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c"
	I0722 00:30:34.710059  533196 logs.go:123] Gathering logs for etcd [c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe] ...
	I0722 00:30:34.710103  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe"
	I0722 00:30:34.761805  533196 logs.go:123] Gathering logs for coredns [c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3] ...
	I0722 00:30:34.761836  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3"
	I0722 00:30:34.803019  533196 logs.go:123] Gathering logs for kube-scheduler [c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28] ...
	I0722 00:30:34.803048  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28"
	I0722 00:30:34.857514  533196 logs.go:123] Gathering logs for kube-controller-manager [a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab] ...
	I0722 00:30:34.857545  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab"
	I0722 00:30:34.923135  533196 logs.go:123] Gathering logs for kindnet [f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b] ...
	I0722 00:30:34.923167  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b"
	I0722 00:30:34.991801  533196 logs.go:123] Gathering logs for container status ...
	I0722 00:30:34.991832  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:30:35.054596  533196 logs.go:123] Gathering logs for dmesg ...
	I0722 00:30:35.054627  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:30:35.074620  533196 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:30:35.074647  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:30:35.242715  533196 logs.go:123] Gathering logs for kube-proxy [7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d] ...
	I0722 00:30:35.242747  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d"
	I0722 00:30:35.287096  533196 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:30:35.287124  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:30:35.377856  533196 logs.go:123] Gathering logs for kubelet ...
	I0722 00:30:35.377890  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 00:30:35.424993  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.164143    1534 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:35.425232  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.164181    1534 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:35.426332  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.181056    1534 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-783853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:35.426523  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.181092    1534 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-783853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:35.426711  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.181136    1534 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-783853' and this object
	W0722 00:30:35.426919  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.181148    1534 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-783853' and this object
	I0722 00:30:35.459791  533196 out.go:304] Setting ErrFile to fd 2...
	I0722 00:30:35.459821  533196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 00:30:35.459873  533196 out.go:239] X Problems detected in kubelet:
	W0722 00:30:35.459881  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.164181    1534 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:35.459889  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.181056    1534 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-783853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:35.459901  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.181092    1534 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-783853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:35.459907  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.181136    1534 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-783853' and this object
	W0722 00:30:35.459916  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.181148    1534 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-783853' and this object
	I0722 00:30:35.459922  533196 out.go:304] Setting ErrFile to fd 2...
	I0722 00:30:35.459927  533196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:30:45.461013  533196 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:30:45.474972  533196 api_server.go:72] duration metric: took 2m35.530339378s to wait for apiserver process to appear ...
	I0722 00:30:45.475005  533196 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:30:45.475042  533196 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:30:45.475106  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:30:45.513719  533196 cri.go:89] found id: "f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c"
	I0722 00:30:45.513742  533196 cri.go:89] found id: ""
	I0722 00:30:45.513750  533196 logs.go:276] 1 containers: [f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c]
	I0722 00:30:45.513808  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:45.517159  533196 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:30:45.517228  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:30:45.555750  533196 cri.go:89] found id: "c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe"
	I0722 00:30:45.555769  533196 cri.go:89] found id: ""
	I0722 00:30:45.555777  533196 logs.go:276] 1 containers: [c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe]
	I0722 00:30:45.555837  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:45.559364  533196 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:30:45.559433  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:30:45.625418  533196 cri.go:89] found id: "c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3"
	I0722 00:30:45.625438  533196 cri.go:89] found id: ""
	I0722 00:30:45.625446  533196 logs.go:276] 1 containers: [c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3]
	I0722 00:30:45.625499  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:45.628937  533196 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:30:45.629056  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:30:45.670843  533196 cri.go:89] found id: "c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28"
	I0722 00:30:45.670865  533196 cri.go:89] found id: ""
	I0722 00:30:45.670874  533196 logs.go:276] 1 containers: [c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28]
	I0722 00:30:45.670927  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:45.674516  533196 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:30:45.674594  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:30:45.714156  533196 cri.go:89] found id: "7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d"
	I0722 00:30:45.714177  533196 cri.go:89] found id: ""
	I0722 00:30:45.714185  533196 logs.go:276] 1 containers: [7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d]
	I0722 00:30:45.714239  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:45.717704  533196 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:30:45.717777  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:30:45.758274  533196 cri.go:89] found id: "a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab"
	I0722 00:30:45.758345  533196 cri.go:89] found id: ""
	I0722 00:30:45.758361  533196 logs.go:276] 1 containers: [a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab]
	I0722 00:30:45.758426  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:45.761739  533196 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:30:45.761808  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:30:45.806368  533196 cri.go:89] found id: "f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b"
	I0722 00:30:45.806390  533196 cri.go:89] found id: ""
	I0722 00:30:45.806399  533196 logs.go:276] 1 containers: [f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b]
	I0722 00:30:45.806457  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:45.809877  533196 logs.go:123] Gathering logs for kube-controller-manager [a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab] ...
	I0722 00:30:45.809897  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab"
	I0722 00:30:45.881045  533196 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:30:45.881089  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:30:45.983965  533196 logs.go:123] Gathering logs for container status ...
	I0722 00:30:45.983999  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:30:46.063341  533196 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:30:46.063373  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:30:46.198117  533196 logs.go:123] Gathering logs for kube-apiserver [f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c] ...
	I0722 00:30:46.198173  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c"
	I0722 00:30:46.254454  533196 logs.go:123] Gathering logs for coredns [c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3] ...
	I0722 00:30:46.254489  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3"
	I0722 00:30:46.308015  533196 logs.go:123] Gathering logs for kube-scheduler [c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28] ...
	I0722 00:30:46.308042  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28"
	I0722 00:30:46.352117  533196 logs.go:123] Gathering logs for kube-proxy [7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d] ...
	I0722 00:30:46.352148  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d"
	I0722 00:30:46.392160  533196 logs.go:123] Gathering logs for kindnet [f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b] ...
	I0722 00:30:46.392191  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b"
	I0722 00:30:46.439139  533196 logs.go:123] Gathering logs for kubelet ...
	I0722 00:30:46.439173  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 00:30:46.473391  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.164143    1534 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:46.473636  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.164181    1534 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:46.474980  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.181056    1534 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-783853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:46.475176  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.181092    1534 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-783853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:46.475364  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.181136    1534 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-783853' and this object
	W0722 00:30:46.475583  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.181148    1534 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-783853' and this object
	I0722 00:30:46.518202  533196 logs.go:123] Gathering logs for dmesg ...
	I0722 00:30:46.518235  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:30:46.537984  533196 logs.go:123] Gathering logs for etcd [c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe] ...
	I0722 00:30:46.538016  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe"
	I0722 00:30:46.615715  533196 out.go:304] Setting ErrFile to fd 2...
	I0722 00:30:46.615744  533196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 00:30:46.615809  533196 out.go:239] X Problems detected in kubelet:
	W0722 00:30:46.615822  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.164181    1534 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:46.615838  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.181056    1534 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-783853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:46.615846  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.181092    1534 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-783853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:46.615857  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.181136    1534 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-783853' and this object
	W0722 00:30:46.615864  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.181148    1534 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-783853' and this object
	I0722 00:30:46.615870  533196 out.go:304] Setting ErrFile to fd 2...
	I0722 00:30:46.615880  533196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:30:56.618118  533196 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0722 00:30:56.626328  533196 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0722 00:30:56.628076  533196 api_server.go:141] control plane version: v1.30.3
	I0722 00:30:56.628103  533196 api_server.go:131] duration metric: took 11.153089459s to wait for apiserver health ...
	I0722 00:30:56.628111  533196 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:30:56.628134  533196 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:30:56.628200  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:30:56.667151  533196 cri.go:89] found id: "f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c"
	I0722 00:30:56.667170  533196 cri.go:89] found id: ""
	I0722 00:30:56.667179  533196 logs.go:276] 1 containers: [f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c]
	I0722 00:30:56.667241  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:56.671089  533196 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:30:56.671165  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:30:56.710114  533196 cri.go:89] found id: "c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe"
	I0722 00:30:56.710137  533196 cri.go:89] found id: ""
	I0722 00:30:56.710145  533196 logs.go:276] 1 containers: [c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe]
	I0722 00:30:56.710201  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:56.713645  533196 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:30:56.713719  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:30:56.756273  533196 cri.go:89] found id: "c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3"
	I0722 00:30:56.756297  533196 cri.go:89] found id: ""
	I0722 00:30:56.756305  533196 logs.go:276] 1 containers: [c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3]
	I0722 00:30:56.756383  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:56.760079  533196 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:30:56.760174  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:30:56.798045  533196 cri.go:89] found id: "c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28"
	I0722 00:30:56.798068  533196 cri.go:89] found id: ""
	I0722 00:30:56.798076  533196 logs.go:276] 1 containers: [c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28]
	I0722 00:30:56.798146  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:56.804204  533196 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:30:56.804278  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:30:56.849168  533196 cri.go:89] found id: "7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d"
	I0722 00:30:56.849192  533196 cri.go:89] found id: ""
	I0722 00:30:56.849200  533196 logs.go:276] 1 containers: [7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d]
	I0722 00:30:56.849256  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:56.852970  533196 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:30:56.853040  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:30:56.891066  533196 cri.go:89] found id: "a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab"
	I0722 00:30:56.891127  533196 cri.go:89] found id: ""
	I0722 00:30:56.891149  533196 logs.go:276] 1 containers: [a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab]
	I0722 00:30:56.891227  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:56.894626  533196 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:30:56.894697  533196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:30:56.933676  533196 cri.go:89] found id: "f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b"
	I0722 00:30:56.933698  533196 cri.go:89] found id: ""
	I0722 00:30:56.933706  533196 logs.go:276] 1 containers: [f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b]
	I0722 00:30:56.933759  533196 ssh_runner.go:195] Run: which crictl
	I0722 00:30:56.936976  533196 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:30:56.937001  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:30:57.096441  533196 logs.go:123] Gathering logs for kube-apiserver [f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c] ...
	I0722 00:30:57.096472  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c"
	I0722 00:30:57.199371  533196 logs.go:123] Gathering logs for kube-scheduler [c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28] ...
	I0722 00:30:57.199407  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28"
	I0722 00:30:57.247914  533196 logs.go:123] Gathering logs for kube-proxy [7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d] ...
	I0722 00:30:57.247947  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d"
	I0722 00:30:57.290832  533196 logs.go:123] Gathering logs for kube-controller-manager [a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab] ...
	I0722 00:30:57.290859  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab"
	I0722 00:30:57.359616  533196 logs.go:123] Gathering logs for kindnet [f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b] ...
	I0722 00:30:57.359652  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b"
	I0722 00:30:57.415081  533196 logs.go:123] Gathering logs for kubelet ...
	I0722 00:30:57.415111  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 00:30:57.455262  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.164143    1534 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:57.455499  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.164181    1534 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:57.456582  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.181056    1534 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-783853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:57.456775  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.181092    1534 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-783853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:57.456963  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.181136    1534 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-783853' and this object
	W0722 00:30:57.457171  533196 logs.go:138] Found kubelet problem: Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.181148    1534 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-783853' and this object
	I0722 00:30:57.500022  533196 logs.go:123] Gathering logs for dmesg ...
	I0722 00:30:57.500048  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:30:57.519184  533196 logs.go:123] Gathering logs for etcd [c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe] ...
	I0722 00:30:57.519213  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe"
	I0722 00:30:57.567002  533196 logs.go:123] Gathering logs for coredns [c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3] ...
	I0722 00:30:57.567035  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3"
	I0722 00:30:57.607684  533196 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:30:57.607713  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:30:57.710828  533196 logs.go:123] Gathering logs for container status ...
	I0722 00:30:57.710871  533196 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:30:57.766065  533196 out.go:304] Setting ErrFile to fd 2...
	I0722 00:30:57.766100  533196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 00:30:57.766162  533196 out.go:239] X Problems detected in kubelet:
	W0722 00:30:57.766174  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.164181    1534 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:57.766191  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.181056    1534 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-783853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:57.766202  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.181092    1534 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-783853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-783853' and this object
	W0722 00:30:57.766217  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: W0722 00:28:56.181136    1534 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-783853' and this object
	W0722 00:30:57.766225  533196 out.go:239]   Jul 22 00:28:56 addons-783853 kubelet[1534]: E0722 00:28:56.181148    1534 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-783853" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-783853' and this object
	I0722 00:30:57.766234  533196 out.go:304] Setting ErrFile to fd 2...
	I0722 00:30:57.766239  533196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:31:07.778676  533196 system_pods.go:59] 18 kube-system pods found
	I0722 00:31:07.778717  533196 system_pods.go:61] "coredns-7db6d8ff4d-7mkbx" [23f5be8a-5c87-4784-b863-324b9a79fccf] Running
	I0722 00:31:07.778724  533196 system_pods.go:61] "csi-hostpath-attacher-0" [87434606-a156-4af9-89c7-87f1b925aa18] Running
	I0722 00:31:07.778728  533196 system_pods.go:61] "csi-hostpath-resizer-0" [06dcad40-139c-4450-85ac-0c181a0c4ba8] Running
	I0722 00:31:07.778733  533196 system_pods.go:61] "csi-hostpathplugin-kn5st" [2bb4c17d-23bf-4aa7-a4c5-c61ccc25cd62] Running
	I0722 00:31:07.778772  533196 system_pods.go:61] "etcd-addons-783853" [db67dcf2-4601-498e-ab87-4d6b347e968a] Running
	I0722 00:31:07.778777  533196 system_pods.go:61] "kindnet-cdpvw" [5c2685c2-cf4b-4dc1-b2ce-407adb3e4b65] Running
	I0722 00:31:07.778784  533196 system_pods.go:61] "kube-apiserver-addons-783853" [8ec42fcb-ab7b-4b3b-a6c1-ee832cb2d96c] Running
	I0722 00:31:07.778788  533196 system_pods.go:61] "kube-controller-manager-addons-783853" [43234280-4e30-42a9-a39f-3ecf1ab25a34] Running
	I0722 00:31:07.778803  533196 system_pods.go:61] "kube-ingress-dns-minikube" [4f67e797-baba-4022-b2e9-f969cb82f4fb] Running
	I0722 00:31:07.778807  533196 system_pods.go:61] "kube-proxy-v7srs" [504b64d6-49a4-472b-9ede-45723f69fab1] Running
	I0722 00:31:07.778812  533196 system_pods.go:61] "kube-scheduler-addons-783853" [dc81eee1-f262-4ca8-8856-f56e30661a00] Running
	I0722 00:31:07.778818  533196 system_pods.go:61] "metrics-server-c59844bb4-znqdq" [3ecb4a8a-e4fc-46e1-b6cb-e0a2f7adc362] Running
	I0722 00:31:07.778831  533196 system_pods.go:61] "nvidia-device-plugin-daemonset-jwvh7" [03f22a4c-c638-40a2-8a03-0b0770a62063] Running
	I0722 00:31:07.778836  533196 system_pods.go:61] "registry-656c9c8d9c-m9wqh" [d562888d-bd3c-4b3f-9adc-aea340501248] Running
	I0722 00:31:07.778840  533196 system_pods.go:61] "registry-proxy-qs2hs" [c0bbfdb6-7c30-4635-b7e5-b3509185506d] Running
	I0722 00:31:07.778844  533196 system_pods.go:61] "snapshot-controller-745499f584-9cqss" [118575a6-1b12-4aa7-bc7d-83e150ed8d0a] Running
	I0722 00:31:07.778847  533196 system_pods.go:61] "snapshot-controller-745499f584-b6v2r" [cdd8b6ed-74e6-4df0-84eb-3c0a7fd51c86] Running
	I0722 00:31:07.778852  533196 system_pods.go:61] "storage-provisioner" [13a8c1f3-5cee-4d0a-bd3a-3611f982b615] Running
	I0722 00:31:07.778860  533196 system_pods.go:74] duration metric: took 11.150742375s to wait for pod list to return data ...
	I0722 00:31:07.778872  533196 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:31:07.781242  533196 default_sa.go:45] found service account: "default"
	I0722 00:31:07.781269  533196 default_sa.go:55] duration metric: took 2.389438ms for default service account to be created ...
	I0722 00:31:07.781279  533196 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:31:07.791027  533196 system_pods.go:86] 18 kube-system pods found
	I0722 00:31:07.791065  533196 system_pods.go:89] "coredns-7db6d8ff4d-7mkbx" [23f5be8a-5c87-4784-b863-324b9a79fccf] Running
	I0722 00:31:07.791073  533196 system_pods.go:89] "csi-hostpath-attacher-0" [87434606-a156-4af9-89c7-87f1b925aa18] Running
	I0722 00:31:07.791078  533196 system_pods.go:89] "csi-hostpath-resizer-0" [06dcad40-139c-4450-85ac-0c181a0c4ba8] Running
	I0722 00:31:07.791082  533196 system_pods.go:89] "csi-hostpathplugin-kn5st" [2bb4c17d-23bf-4aa7-a4c5-c61ccc25cd62] Running
	I0722 00:31:07.791087  533196 system_pods.go:89] "etcd-addons-783853" [db67dcf2-4601-498e-ab87-4d6b347e968a] Running
	I0722 00:31:07.791093  533196 system_pods.go:89] "kindnet-cdpvw" [5c2685c2-cf4b-4dc1-b2ce-407adb3e4b65] Running
	I0722 00:31:07.791097  533196 system_pods.go:89] "kube-apiserver-addons-783853" [8ec42fcb-ab7b-4b3b-a6c1-ee832cb2d96c] Running
	I0722 00:31:07.791102  533196 system_pods.go:89] "kube-controller-manager-addons-783853" [43234280-4e30-42a9-a39f-3ecf1ab25a34] Running
	I0722 00:31:07.791107  533196 system_pods.go:89] "kube-ingress-dns-minikube" [4f67e797-baba-4022-b2e9-f969cb82f4fb] Running
	I0722 00:31:07.791112  533196 system_pods.go:89] "kube-proxy-v7srs" [504b64d6-49a4-472b-9ede-45723f69fab1] Running
	I0722 00:31:07.791116  533196 system_pods.go:89] "kube-scheduler-addons-783853" [dc81eee1-f262-4ca8-8856-f56e30661a00] Running
	I0722 00:31:07.791123  533196 system_pods.go:89] "metrics-server-c59844bb4-znqdq" [3ecb4a8a-e4fc-46e1-b6cb-e0a2f7adc362] Running
	I0722 00:31:07.791128  533196 system_pods.go:89] "nvidia-device-plugin-daemonset-jwvh7" [03f22a4c-c638-40a2-8a03-0b0770a62063] Running
	I0722 00:31:07.791135  533196 system_pods.go:89] "registry-656c9c8d9c-m9wqh" [d562888d-bd3c-4b3f-9adc-aea340501248] Running
	I0722 00:31:07.791139  533196 system_pods.go:89] "registry-proxy-qs2hs" [c0bbfdb6-7c30-4635-b7e5-b3509185506d] Running
	I0722 00:31:07.791143  533196 system_pods.go:89] "snapshot-controller-745499f584-9cqss" [118575a6-1b12-4aa7-bc7d-83e150ed8d0a] Running
	I0722 00:31:07.791148  533196 system_pods.go:89] "snapshot-controller-745499f584-b6v2r" [cdd8b6ed-74e6-4df0-84eb-3c0a7fd51c86] Running
	I0722 00:31:07.791155  533196 system_pods.go:89] "storage-provisioner" [13a8c1f3-5cee-4d0a-bd3a-3611f982b615] Running
	I0722 00:31:07.791162  533196 system_pods.go:126] duration metric: took 9.876648ms to wait for k8s-apps to be running ...
	I0722 00:31:07.791172  533196 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:31:07.791232  533196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:31:07.803788  533196 system_svc.go:56] duration metric: took 12.605404ms WaitForService to wait for kubelet
	I0722 00:31:07.803816  533196 kubeadm.go:582] duration metric: took 2m57.859188435s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:31:07.803838  533196 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:31:07.807620  533196 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0722 00:31:07.807656  533196 node_conditions.go:123] node cpu capacity is 2
	I0722 00:31:07.807669  533196 node_conditions.go:105] duration metric: took 3.825805ms to run NodePressure ...
	I0722 00:31:07.807682  533196 start.go:241] waiting for startup goroutines ...
	I0722 00:31:07.807693  533196 start.go:246] waiting for cluster config update ...
	I0722 00:31:07.807712  533196 start.go:255] writing updated cluster config ...
	I0722 00:31:07.807994  533196 ssh_runner.go:195] Run: rm -f paused
	I0722 00:31:08.136617  533196 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 00:31:08.139489  533196 out.go:177] * Done! kubectl is now configured to use "addons-783853" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 22 00:34:58 addons-783853 crio[964]: time="2024-07-22 00:34:58.376447377Z" level=info msg="Stopping container: c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661 (timeout: 2s)" id=49ce4a25-0b03-4ef2-a26a-1b041878d9c5 name=/runtime.v1.RuntimeService/StopContainer
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.383270972Z" level=warning msg="Stopping container c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=49ce4a25-0b03-4ef2-a26a-1b041878d9c5 name=/runtime.v1.RuntimeService/StopContainer
	Jul 22 00:35:00 addons-783853 conmon[4677]: conmon c461369c1e9544a7a1ee <ninfo>: container 4688 exited with status 137
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.526734025Z" level=info msg="Stopped container c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661: ingress-nginx/ingress-nginx-controller-6d9bd977d4-g7h89/controller" id=49ce4a25-0b03-4ef2-a26a-1b041878d9c5 name=/runtime.v1.RuntimeService/StopContainer
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.527432586Z" level=info msg="Stopping pod sandbox: 5cfde4c0b882c55e481b5b56cc6e52596ed3f9726559b58ec45b944bb4344f48" id=5708815c-cdb4-4e83-9767-775bf18df93f name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.531129446Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-FU3MY5DUL3XMDUCS - [0:0]\n:KUBE-HP-L4MDPUJWJ77FKVLS - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-FU3MY5DUL3XMDUCS\n-X KUBE-HP-L4MDPUJWJ77FKVLS\nCOMMIT\n"
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.532693536Z" level=info msg="Closing host port tcp:80"
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.532792442Z" level=info msg="Closing host port tcp:443"
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.534355087Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.534388064Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.534596707Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-6d9bd977d4-g7h89 Namespace:ingress-nginx ID:5cfde4c0b882c55e481b5b56cc6e52596ed3f9726559b58ec45b944bb4344f48 UID:47b77f41-f681-441d-bf14-c37ac84b670d NetNS:/var/run/netns/ef2ae07f-657d-49fc-85bc-d487b97ea862 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.534876136Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-6d9bd977d4-g7h89 from CNI network \"kindnet\" (type=ptp)"
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.562635754Z" level=info msg="Stopped pod sandbox: 5cfde4c0b882c55e481b5b56cc6e52596ed3f9726559b58ec45b944bb4344f48" id=5708815c-cdb4-4e83-9767-775bf18df93f name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.615193837Z" level=info msg="Removing container: c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661" id=09b7a131-e742-4a1b-ada9-79c15cb0bc50 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 22 00:35:00 addons-783853 crio[964]: time="2024-07-22 00:35:00.629450881Z" level=info msg="Removed container c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661: ingress-nginx/ingress-nginx-controller-6d9bd977d4-g7h89/controller" id=09b7a131-e742-4a1b-ada9-79c15cb0bc50 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 22 00:35:58 addons-783853 crio[964]: time="2024-07-22 00:35:58.289454758Z" level=info msg="Stopping pod sandbox: 5cfde4c0b882c55e481b5b56cc6e52596ed3f9726559b58ec45b944bb4344f48" id=8643f377-2d86-4d82-9e79-fdbb16f688b7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 22 00:35:58 addons-783853 crio[964]: time="2024-07-22 00:35:58.289504572Z" level=info msg="Stopped pod sandbox (already stopped): 5cfde4c0b882c55e481b5b56cc6e52596ed3f9726559b58ec45b944bb4344f48" id=8643f377-2d86-4d82-9e79-fdbb16f688b7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 22 00:35:58 addons-783853 crio[964]: time="2024-07-22 00:35:58.290082344Z" level=info msg="Removing pod sandbox: 5cfde4c0b882c55e481b5b56cc6e52596ed3f9726559b58ec45b944bb4344f48" id=be6d4cfa-7498-4881-95da-acdfce8bba2d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 22 00:35:58 addons-783853 crio[964]: time="2024-07-22 00:35:58.298384916Z" level=info msg="Removed pod sandbox: 5cfde4c0b882c55e481b5b56cc6e52596ed3f9726559b58ec45b944bb4344f48" id=be6d4cfa-7498-4881-95da-acdfce8bba2d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 22 00:37:08 addons-783853 crio[964]: time="2024-07-22 00:37:08.548833735Z" level=info msg="Stopping container: edcd40174a08804c61cf34ee73700d7e681a6fead373db9a26f5a778d396cbc0 (timeout: 30s)" id=5ccdb58f-ccf8-4ccf-931d-210bab33f10c name=/runtime.v1.RuntimeService/StopContainer
	Jul 22 00:37:09 addons-783853 crio[964]: time="2024-07-22 00:37:09.730053600Z" level=info msg="Stopped container edcd40174a08804c61cf34ee73700d7e681a6fead373db9a26f5a778d396cbc0: kube-system/metrics-server-c59844bb4-znqdq/metrics-server" id=5ccdb58f-ccf8-4ccf-931d-210bab33f10c name=/runtime.v1.RuntimeService/StopContainer
	Jul 22 00:37:09 addons-783853 crio[964]: time="2024-07-22 00:37:09.730987905Z" level=info msg="Stopping pod sandbox: 499ced9a5bc9b0b92219b811b5fbb43446f3a4036e17102f4272b4573abfde25" id=8f9e147e-c388-41cc-8bd1-a2a504d7a87d name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 22 00:37:09 addons-783853 crio[964]: time="2024-07-22 00:37:09.731208577Z" level=info msg="Got pod network &{Name:metrics-server-c59844bb4-znqdq Namespace:kube-system ID:499ced9a5bc9b0b92219b811b5fbb43446f3a4036e17102f4272b4573abfde25 UID:3ecb4a8a-e4fc-46e1-b6cb-e0a2f7adc362 NetNS:/var/run/netns/42b5007e-3d4e-4f68-8c60-0354deea20fd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 22 00:37:09 addons-783853 crio[964]: time="2024-07-22 00:37:09.731344283Z" level=info msg="Deleting pod kube-system_metrics-server-c59844bb4-znqdq from CNI network \"kindnet\" (type=ptp)"
	Jul 22 00:37:09 addons-783853 crio[964]: time="2024-07-22 00:37:09.782987991Z" level=info msg="Stopped pod sandbox: 499ced9a5bc9b0b92219b811b5fbb43446f3a4036e17102f4272b4573abfde25" id=8f9e147e-c388-41cc-8bd1-a2a504d7a87d name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                          CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b9526602d9ad3       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6          2 minutes ago       Running             hello-world-app           0                   82db09d763349       hello-world-app-6778b5fc9f-pl4h9
	004ab4d184e30       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                4 minutes ago       Running             nginx                     0                   bdf9029e99990       nginx
	dd563aa5d9bbe       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37          5 minutes ago       Running             headlamp                  0                   5ec0375dcb002       headlamp-7867546754-hbj4f
	bb90afc7e859b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69   7 minutes ago       Running             gcp-auth                  0                   cfa15e5448e8b       gcp-auth-5db96cd9b4-mh6ws
	d6ca537fc472a       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                7 minutes ago       Running             yakd                      0                   17d510e84d6d7       yakd-dashboard-799879c74f-7hmg4
	c568a897c0879       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                               8 minutes ago       Running             coredns                   0                   601cf99f8fad6       coredns-7db6d8ff4d-7mkbx
	461123cde6927       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                               8 minutes ago       Running             storage-provisioner       0                   14df73512b59b       storage-provisioner
	f1d9ff424c7f6       docker.io/kindest/kindnetd@sha256:14100a3a7aca6cad3de3f26ee342ad937ca7d2844b1983d3baa7bf5f491baa7a             8 minutes ago       Running             kindnet-cni               0                   330306786f6bf       kindnet-cdpvw
	7ce7a71ddc6cb       2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be                                               8 minutes ago       Running             kube-proxy                0                   b413ec036bbf9       kube-proxy-v7srs
	c3ad375225f40       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                               9 minutes ago       Running             etcd                      0                   f9984e78264dd       etcd-addons-783853
	f0031f14ce88c       61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca                                               9 minutes ago       Running             kube-apiserver            0                   425237d385aa6       kube-apiserver-addons-783853
	c4a0894f7c861       d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355                                               9 minutes ago       Running             kube-scheduler            0                   61654809120f4       kube-scheduler-addons-783853
	a3d74c472e3cd       8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a                                               9 minutes ago       Running             kube-controller-manager   0                   39daf1ed56afa       kube-controller-manager-addons-783853
	
	
	==> coredns [c568a897c087948f89e3e12e04c3d8b6b650085ec5e275a4c3b6e76f87a1f0f3] <==
	[INFO] 10.244.0.18:53704 - 64062 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003233617s
	[INFO] 10.244.0.18:56435 - 64781 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000939852s
	[INFO] 10.244.0.18:56435 - 41472 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0009042s
	[INFO] 10.244.0.18:56276 - 62664 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000117359s
	[INFO] 10.244.0.18:56276 - 3319 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000048435s
	[INFO] 10.244.0.18:35844 - 22610 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000056337s
	[INFO] 10.244.0.18:35844 - 47696 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000034552s
	[INFO] 10.244.0.18:59693 - 7615 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000055115s
	[INFO] 10.244.0.18:59693 - 41146 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000032961s
	[INFO] 10.244.0.18:45044 - 20384 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00164986s
	[INFO] 10.244.0.18:45044 - 50339 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001574872s
	[INFO] 10.244.0.18:51242 - 39676 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00007292s
	[INFO] 10.244.0.18:51242 - 22015 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000126131s
	[INFO] 10.244.0.20:36704 - 32684 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000147317s
	[INFO] 10.244.0.20:42870 - 38273 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000072058s
	[INFO] 10.244.0.20:41244 - 61005 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000084481s
	[INFO] 10.244.0.20:58722 - 8117 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000066881s
	[INFO] 10.244.0.20:46881 - 24427 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000077621s
	[INFO] 10.244.0.20:58943 - 51881 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00006094s
	[INFO] 10.244.0.20:42125 - 37719 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002727657s
	[INFO] 10.244.0.20:44897 - 7940 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002444257s
	[INFO] 10.244.0.20:36993 - 13633 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000844146s
	[INFO] 10.244.0.20:39896 - 15207 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000778832s
	[INFO] 10.244.0.22:45649 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000201463s
	[INFO] 10.244.0.22:41238 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00012773s
	
	
	==> describe nodes <==
	Name:               addons-783853
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-783853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=addons-783853
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T00_27_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-783853
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 00:27:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-783853
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 00:37:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 00:35:33 +0000   Mon, 22 Jul 2024 00:27:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 00:35:33 +0000   Mon, 22 Jul 2024 00:27:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 00:35:33 +0000   Mon, 22 Jul 2024 00:27:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 00:35:33 +0000   Mon, 22 Jul 2024 00:28:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-783853
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f136131046143669c7ae750f1c3a238
	  System UUID:                a87d4cf7-5057-4542-ac60-0e7b432e998b
	  Boot ID:                    7a479143-663f-4f08-926c-92bb931337b4
	  Kernel Version:             5.15.0-1064-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-pl4h9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  gcp-auth                    gcp-auth-5db96cd9b4-mh6ws                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m51s
	  headlamp                    headlamp-7867546754-hbj4f                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-7db6d8ff4d-7mkbx                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     9m1s
	  kube-system                 etcd-addons-783853                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m14s
	  kube-system                 kindnet-cdpvw                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m1s
	  kube-system                 kube-apiserver-addons-783853             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-controller-manager-addons-783853    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-proxy-v7srs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m1s
	  kube-system                 kube-scheduler-addons-783853             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m55s
	  yakd-dashboard              yakd-dashboard-799879c74f-7hmg4          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     8m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m54s  kube-proxy       
	  Normal  Starting                 9m14s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m14s  kubelet          Node addons-783853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m14s  kubelet          Node addons-783853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m14s  kubelet          Node addons-783853 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m2s   node-controller  Node addons-783853 event: Registered Node addons-783853 in Controller
	  Normal  NodeReady                8m14s  kubelet          Node addons-783853 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000788] FS-Cache: N-cookie c=0000012c [p=00000123 fl=2 nc=0 na=1]
	[  +0.000974] FS-Cache: N-cookie d=00000000656be40d{9p.inode} n=00000000f512889c
	[  +0.001085] FS-Cache: N-key=[8] 'e17a3b0000000000'
	[  +0.002775] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000126 [p=00000123 fl=226 nc=0 na=1]
	[  +0.001001] FS-Cache: O-cookie d=00000000656be40d{9p.inode} n=00000000325d45d1
	[  +0.001152] FS-Cache: O-key=[8] 'e17a3b0000000000'
	[  +0.000783] FS-Cache: N-cookie c=0000012d [p=00000123 fl=2 nc=0 na=1]
	[  +0.000979] FS-Cache: N-cookie d=00000000656be40d{9p.inode} n=00000000d6f0730b
	[  +0.001094] FS-Cache: N-key=[8] 'e17a3b0000000000'
	[  +2.326910] FS-Cache: Duplicate cookie detected
	[  +0.000819] FS-Cache: O-cookie c=00000124 [p=00000123 fl=226 nc=0 na=1]
	[  +0.001037] FS-Cache: O-cookie d=00000000656be40d{9p.inode} n=000000004360beb1
	[  +0.001189] FS-Cache: O-key=[8] 'e07a3b0000000000'
	[  +0.000792] FS-Cache: N-cookie c=0000012f [p=00000123 fl=2 nc=0 na=1]
	[  +0.001032] FS-Cache: N-cookie d=00000000656be40d{9p.inode} n=00000000b0a9a241
	[  +0.001150] FS-Cache: N-key=[8] 'e07a3b0000000000'
	[  +0.313505] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000129 [p=00000123 fl=226 nc=0 na=1]
	[  +0.000999] FS-Cache: O-cookie d=00000000656be40d{9p.inode} n=00000000305cdfe4
	[  +0.001117] FS-Cache: O-key=[8] 'e67a3b0000000000'
	[  +0.000728] FS-Cache: N-cookie c=00000130 [p=00000123 fl=2 nc=0 na=1]
	[  +0.000967] FS-Cache: N-cookie d=00000000656be40d{9p.inode} n=000000002ccbfa05
	[  +0.001083] FS-Cache: N-key=[8] 'e67a3b0000000000'
	[Jul22 00:00] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [c3ad375225f4082c1381bcc1bffc025d3226d530d6add12670b79ab3468fb8fe] <==
	{"level":"info","ts":"2024-07-22T00:27:50.74877Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T00:27:50.748849Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T00:27:50.74475Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:27:50.744867Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:27:50.749202Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:27:50.74928Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:27:50.750785Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-07-22T00:28:11.723125Z","caller":"traceutil/trace.go:171","msg":"trace[16347025] transaction","detail":"{read_only:false; response_revision:368; number_of_response:1; }","duration":"115.038155ms","start":"2024-07-22T00:28:11.608071Z","end":"2024-07-22T00:28:11.723109Z","steps":["trace[16347025] 'process raft request'  (duration: 114.924447ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T00:28:14.043432Z","caller":"traceutil/trace.go:171","msg":"trace[658452004] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"159.858354ms","start":"2024-07-22T00:28:13.883556Z","end":"2024-07-22T00:28:14.043415Z","steps":["trace[658452004] 'process raft request'  (duration: 159.700888ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T00:28:14.045187Z","caller":"traceutil/trace.go:171","msg":"trace[1874793671] linearizableReadLoop","detail":"{readStateIndex:394; appliedIndex:394; }","duration":"127.978644ms","start":"2024-07-22T00:28:13.917194Z","end":"2024-07-22T00:28:14.045173Z","steps":["trace[1874793671] 'read index received'  (duration: 127.824214ms)","trace[1874793671] 'applied index is now lower than readState.Index'  (duration: 153.339µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-22T00:28:14.053102Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.237998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-22T00:28:14.053227Z","caller":"traceutil/trace.go:171","msg":"trace[1965193589] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:382; }","duration":"152.381591ms","start":"2024-07-22T00:28:13.900831Z","end":"2024-07-22T00:28:14.053213Z","steps":["trace[1965193589] 'agreement among raft nodes before linearized reading'  (duration: 152.137034ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T00:28:14.121592Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.510252ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-22T00:28:14.121739Z","caller":"traceutil/trace.go:171","msg":"trace[1510974284] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:0; response_revision:383; }","duration":"194.666684ms","start":"2024-07-22T00:28:13.927058Z","end":"2024-07-22T00:28:14.121725Z","steps":["trace[1510974284] 'agreement among raft nodes before linearized reading'  (duration: 194.478924ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T00:28:14.122124Z","caller":"traceutil/trace.go:171","msg":"trace[897956857] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"198.797486ms","start":"2024-07-22T00:28:13.923317Z","end":"2024-07-22T00:28:14.122114Z","steps":["trace[897956857] 'process raft request'  (duration: 198.054355ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T00:28:14.122319Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.019145ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:1 size:3145"}
	{"level":"info","ts":"2024-07-22T00:28:14.122378Z","caller":"traceutil/trace.go:171","msg":"trace[655934887] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:1; response_revision:383; }","duration":"110.081907ms","start":"2024-07-22T00:28:14.012289Z","end":"2024-07-22T00:28:14.122371Z","steps":["trace[655934887] 'agreement among raft nodes before linearized reading'  (duration: 109.991871ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T00:28:14.122533Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.274384ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-07-22T00:28:14.122586Z","caller":"traceutil/trace.go:171","msg":"trace[2081242360] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:383; }","duration":"115.328104ms","start":"2024-07-22T00:28:14.007251Z","end":"2024-07-22T00:28:14.12258Z","steps":["trace[2081242360] 'agreement among raft nodes before linearized reading'  (duration: 115.253616ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T00:28:14.122675Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.468782ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-07-22T00:28:14.12272Z","caller":"traceutil/trace.go:171","msg":"trace[2025422303] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:383; }","duration":"115.513944ms","start":"2024-07-22T00:28:14.007201Z","end":"2024-07-22T00:28:14.122715Z","steps":["trace[2025422303] 'agreement among raft nodes before linearized reading'  (duration: 115.45549ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T00:28:14.122808Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.910909ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-07-22T00:28:14.122855Z","caller":"traceutil/trace.go:171","msg":"trace[1337862840] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:383; }","duration":"116.957835ms","start":"2024-07-22T00:28:14.005889Z","end":"2024-07-22T00:28:14.122847Z","steps":["trace[1337862840] 'agreement among raft nodes before linearized reading'  (duration: 116.896139ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T00:28:14.438867Z","caller":"traceutil/trace.go:171","msg":"trace[2027075324] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"105.931741ms","start":"2024-07-22T00:28:14.332917Z","end":"2024-07-22T00:28:14.438849Z","steps":["trace[2027075324] 'process raft request'  (duration: 52.087976ms)","trace[2027075324] 'compare'  (duration: 53.476696ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-22T00:28:14.462054Z","caller":"traceutil/trace.go:171","msg":"trace[1370139546] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"113.326462ms","start":"2024-07-22T00:28:14.348703Z","end":"2024-07-22T00:28:14.462029Z","steps":["trace[1370139546] 'process raft request'  (duration: 89.881124ms)"],"step_count":1}
	
	
	==> gcp-auth [bb90afc7e859bec3c7d9d17676acc458e87de8f1922ef9b68d3f30354d7cc83e] <==
	2024/07/22 00:29:56 GCP Auth Webhook started!
	2024/07/22 00:31:09 Ready to marshal response ...
	2024/07/22 00:31:09 Ready to write response ...
	2024/07/22 00:31:09 Ready to marshal response ...
	2024/07/22 00:31:09 Ready to write response ...
	2024/07/22 00:31:09 Ready to marshal response ...
	2024/07/22 00:31:09 Ready to write response ...
	2024/07/22 00:31:18 Ready to marshal response ...
	2024/07/22 00:31:18 Ready to write response ...
	2024/07/22 00:31:25 Ready to marshal response ...
	2024/07/22 00:31:25 Ready to write response ...
	2024/07/22 00:31:25 Ready to marshal response ...
	2024/07/22 00:31:25 Ready to write response ...
	2024/07/22 00:31:34 Ready to marshal response ...
	2024/07/22 00:31:34 Ready to write response ...
	2024/07/22 00:31:53 Ready to marshal response ...
	2024/07/22 00:31:53 Ready to write response ...
	2024/07/22 00:32:16 Ready to marshal response ...
	2024/07/22 00:32:16 Ready to write response ...
	2024/07/22 00:32:33 Ready to marshal response ...
	2024/07/22 00:32:33 Ready to write response ...
	2024/07/22 00:34:55 Ready to marshal response ...
	2024/07/22 00:34:55 Ready to write response ...
	
	
	==> kernel <==
	 00:37:10 up 1 day,  8:19,  0 users,  load average: 0.10, 0.76, 1.83
	Linux addons-783853 5.15.0-1064-aws #70~20.04.1-Ubuntu SMP Thu Jun 27 14:52:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f1d9ff424c7f69ba15807e44e6a7c5c0ba5b9a853caae673f0fdeaeebed3be9b] <==
	E0722 00:36:00.429763       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0722 00:36:02.396855       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0722 00:36:02.396898       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0722 00:36:05.727286       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0722 00:36:05.727326       1 main.go:299] handling current node
	W0722 00:36:14.269513       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0722 00:36:14.269545       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0722 00:36:15.726734       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0722 00:36:15.726771       1 main.go:299] handling current node
	I0722 00:36:25.726724       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0722 00:36:25.726763       1 main.go:299] handling current node
	W0722 00:36:32.935415       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 00:36:32.935446       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0722 00:36:35.726643       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0722 00:36:35.726683       1 main.go:299] handling current node
	W0722 00:36:44.264922       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0722 00:36:44.264957       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0722 00:36:45.726639       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0722 00:36:45.726753       1 main.go:299] handling current node
	W0722 00:36:47.478957       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0722 00:36:47.478991       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0722 00:36:55.726819       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0722 00:36:55.726859       1 main.go:299] handling current node
	I0722 00:37:05.726902       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0722 00:37:05.726937       1 main.go:299] handling current node
	
	
	==> kube-apiserver [f0031f14ce88c73e0fde07c720ae3bda67a72d92f4a3b4b868a1f8cce8dd9c7c] <==
	W0722 00:30:39.192149       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 00:30:39.192198       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 00:30:39.241274       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0722 00:31:09.055537       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.140.170"}
	E0722 00:31:50.159861       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0722 00:32:05.472420       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0722 00:32:24.066445       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0722 00:32:25.099123       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0722 00:32:32.935015       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0722 00:32:32.935069       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0722 00:32:32.975669       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0722 00:32:32.975804       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0722 00:32:32.977693       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0722 00:32:32.977800       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0722 00:32:32.989517       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0722 00:32:32.990048       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0722 00:32:33.026234       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0722 00:32:33.026373       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0722 00:32:33.570050       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0722 00:32:33.900686       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.130.95"}
	W0722 00:32:33.978051       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0722 00:32:34.026880       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0722 00:32:34.044770       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0722 00:34:55.517712       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.191.50"}
	
	
	==> kube-controller-manager [a3d74c472e3cde86022c2932bd1ff2d3c5f43bf25cb2a271a135255daeb96bab] <==
	I0722 00:34:57.623165       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="30.442µs"
	I0722 00:35:07.462551       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0722 00:35:15.680620       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 00:35:15.680659       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 00:35:18.749699       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 00:35:18.749746       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 00:35:27.830615       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 00:35:27.830651       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 00:35:53.308187       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 00:35:53.308230       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 00:35:54.967475       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 00:35:54.967524       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 00:36:00.154318       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 00:36:00.154475       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 00:36:27.088645       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 00:36:27.088801       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 00:36:36.458220       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 00:36:36.458266       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 00:36:38.714250       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 00:36:38.714286       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 00:36:43.714742       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 00:36:43.714779       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 00:37:05.682455       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 00:37:05.682590       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0722 00:37:08.530758       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="4.144µs"
	
	
	==> kube-proxy [7ce7a71ddc6cb50f3c752bd7831f5e8b71ced102800c74beb5991dd02059a85d] <==
	I0722 00:28:15.248192       1 server_linux.go:69] "Using iptables proxy"
	I0722 00:28:15.496104       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0722 00:28:15.575735       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0722 00:28:15.575791       1 server_linux.go:165] "Using iptables Proxier"
	I0722 00:28:15.776890       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0722 00:28:15.776928       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0722 00:28:15.776958       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 00:28:15.777188       1 server.go:872] "Version info" version="v1.30.3"
	I0722 00:28:15.777213       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:28:15.794390       1 config.go:192] "Starting service config controller"
	I0722 00:28:15.794495       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 00:28:15.794562       1 config.go:101] "Starting endpoint slice config controller"
	I0722 00:28:15.794595       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 00:28:15.795098       1 config.go:319] "Starting node config controller"
	I0722 00:28:15.796848       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 00:28:15.896844       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 00:28:15.898446       1 shared_informer.go:320] Caches are synced for node config
	I0722 00:28:15.898477       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [c4a0894f7c861c1e276bac386f5163af96016696df4161bc3108f8a961019a28] <==
	W0722 00:27:53.792874       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 00:27:53.792972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0722 00:27:53.793050       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 00:27:53.793062       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0722 00:27:53.793132       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 00:27:53.793145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0722 00:27:53.793183       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 00:27:53.793194       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0722 00:27:54.675234       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0722 00:27:54.675368       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0722 00:27:54.751306       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0722 00:27:54.751348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0722 00:27:54.757602       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 00:27:54.757711       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0722 00:27:54.787887       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 00:27:54.787938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 00:27:54.853623       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 00:27:54.853754       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0722 00:27:54.855824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0722 00:27:54.855956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0722 00:27:54.866225       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0722 00:27:54.866349       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0722 00:27:55.109686       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 00:27:55.109815       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0722 00:27:57.088211       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 22 00:34:58 addons-783853 kubelet[1534]: I0722 00:34:58.146424    1534 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5c5341b-a507-4a94-b301-ac25d5f9d4ed" path="/var/lib/kubelet/pods/f5c5341b-a507-4a94-b301-ac25d5f9d4ed/volumes"
	Jul 22 00:34:58 addons-783853 kubelet[1534]: I0722 00:34:58.223093    1534 scope.go:117] "RemoveContainer" containerID="5debcbae3deb69cf2e0876e541415e1d1af6c5e5129b933207b1cf7a5757a849"
	Jul 22 00:34:58 addons-783853 kubelet[1534]: I0722 00:34:58.239675    1534 scope.go:117] "RemoveContainer" containerID="de4f416bafa9c8cb936d99382894bf2c8dff817756b430ed7c220e637dcd92a3"
	Jul 22 00:35:00 addons-783853 kubelet[1534]: I0722 00:35:00.613415    1534 scope.go:117] "RemoveContainer" containerID="c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661"
	Jul 22 00:35:00 addons-783853 kubelet[1534]: I0722 00:35:00.629728    1534 scope.go:117] "RemoveContainer" containerID="c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661"
	Jul 22 00:35:00 addons-783853 kubelet[1534]: E0722 00:35:00.630315    1534 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661\": container with ID starting with c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661 not found: ID does not exist" containerID="c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661"
	Jul 22 00:35:00 addons-783853 kubelet[1534]: I0722 00:35:00.630360    1534 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661"} err="failed to get container status \"c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661\": rpc error: code = NotFound desc = could not find container \"c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661\": container with ID starting with c461369c1e9544a7a1eedc41240b0a87e6e63c878c7a8045b8c202ac4d5c7661 not found: ID does not exist"
	Jul 22 00:35:00 addons-783853 kubelet[1534]: I0722 00:35:00.641726    1534 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47b77f41-f681-441d-bf14-c37ac84b670d-webhook-cert\") pod \"47b77f41-f681-441d-bf14-c37ac84b670d\" (UID: \"47b77f41-f681-441d-bf14-c37ac84b670d\") "
	Jul 22 00:35:00 addons-783853 kubelet[1534]: I0722 00:35:00.641786    1534 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz8pm\" (UniqueName: \"kubernetes.io/projected/47b77f41-f681-441d-bf14-c37ac84b670d-kube-api-access-wz8pm\") pod \"47b77f41-f681-441d-bf14-c37ac84b670d\" (UID: \"47b77f41-f681-441d-bf14-c37ac84b670d\") "
	Jul 22 00:35:00 addons-783853 kubelet[1534]: I0722 00:35:00.644064    1534 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47b77f41-f681-441d-bf14-c37ac84b670d-kube-api-access-wz8pm" (OuterVolumeSpecName: "kube-api-access-wz8pm") pod "47b77f41-f681-441d-bf14-c37ac84b670d" (UID: "47b77f41-f681-441d-bf14-c37ac84b670d"). InnerVolumeSpecName "kube-api-access-wz8pm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 22 00:35:00 addons-783853 kubelet[1534]: I0722 00:35:00.646837    1534 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b77f41-f681-441d-bf14-c37ac84b670d-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "47b77f41-f681-441d-bf14-c37ac84b670d" (UID: "47b77f41-f681-441d-bf14-c37ac84b670d"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 22 00:35:00 addons-783853 kubelet[1534]: I0722 00:35:00.742638    1534 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47b77f41-f681-441d-bf14-c37ac84b670d-webhook-cert\") on node \"addons-783853\" DevicePath \"\""
	Jul 22 00:35:00 addons-783853 kubelet[1534]: I0722 00:35:00.742679    1534 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wz8pm\" (UniqueName: \"kubernetes.io/projected/47b77f41-f681-441d-bf14-c37ac84b670d-kube-api-access-wz8pm\") on node \"addons-783853\" DevicePath \"\""
	Jul 22 00:35:02 addons-783853 kubelet[1534]: I0722 00:35:02.146060    1534 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47b77f41-f681-441d-bf14-c37ac84b670d" path="/var/lib/kubelet/pods/47b77f41-f681-441d-bf14-c37ac84b670d/volumes"
	Jul 22 00:37:08 addons-783853 kubelet[1534]: I0722 00:37:08.547230    1534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-pl4h9" podStartSLOduration=132.396343396 podStartE2EDuration="2m13.5472112s" podCreationTimestamp="2024-07-22 00:34:55 +0000 UTC" firstStartedPulling="2024-07-22 00:34:55.677773796 +0000 UTC m=+419.659718439" lastFinishedPulling="2024-07-22 00:34:56.8286416 +0000 UTC m=+420.810586243" observedRunningTime="2024-07-22 00:34:57.616089942 +0000 UTC m=+421.598034585" watchObservedRunningTime="2024-07-22 00:37:08.5472112 +0000 UTC m=+552.529155851"
	Jul 22 00:37:09 addons-783853 kubelet[1534]: I0722 00:37:09.870383    1534 scope.go:117] "RemoveContainer" containerID="edcd40174a08804c61cf34ee73700d7e681a6fead373db9a26f5a778d396cbc0"
	Jul 22 00:37:09 addons-783853 kubelet[1534]: I0722 00:37:09.897068    1534 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnf8r\" (UniqueName: \"kubernetes.io/projected/3ecb4a8a-e4fc-46e1-b6cb-e0a2f7adc362-kube-api-access-qnf8r\") pod \"3ecb4a8a-e4fc-46e1-b6cb-e0a2f7adc362\" (UID: \"3ecb4a8a-e4fc-46e1-b6cb-e0a2f7adc362\") "
	Jul 22 00:37:09 addons-783853 kubelet[1534]: I0722 00:37:09.897155    1534 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3ecb4a8a-e4fc-46e1-b6cb-e0a2f7adc362-tmp-dir\") pod \"3ecb4a8a-e4fc-46e1-b6cb-e0a2f7adc362\" (UID: \"3ecb4a8a-e4fc-46e1-b6cb-e0a2f7adc362\") "
	Jul 22 00:37:09 addons-783853 kubelet[1534]: I0722 00:37:09.897494    1534 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ecb4a8a-e4fc-46e1-b6cb-e0a2f7adc362-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3ecb4a8a-e4fc-46e1-b6cb-e0a2f7adc362" (UID: "3ecb4a8a-e4fc-46e1-b6cb-e0a2f7adc362"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 22 00:37:09 addons-783853 kubelet[1534]: I0722 00:37:09.903232    1534 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ecb4a8a-e4fc-46e1-b6cb-e0a2f7adc362-kube-api-access-qnf8r" (OuterVolumeSpecName: "kube-api-access-qnf8r") pod "3ecb4a8a-e4fc-46e1-b6cb-e0a2f7adc362" (UID: "3ecb4a8a-e4fc-46e1-b6cb-e0a2f7adc362"). InnerVolumeSpecName "kube-api-access-qnf8r". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 22 00:37:09 addons-783853 kubelet[1534]: I0722 00:37:09.914220    1534 scope.go:117] "RemoveContainer" containerID="edcd40174a08804c61cf34ee73700d7e681a6fead373db9a26f5a778d396cbc0"
	Jul 22 00:37:09 addons-783853 kubelet[1534]: E0722 00:37:09.914707    1534 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edcd40174a08804c61cf34ee73700d7e681a6fead373db9a26f5a778d396cbc0\": container with ID starting with edcd40174a08804c61cf34ee73700d7e681a6fead373db9a26f5a778d396cbc0 not found: ID does not exist" containerID="edcd40174a08804c61cf34ee73700d7e681a6fead373db9a26f5a778d396cbc0"
	Jul 22 00:37:09 addons-783853 kubelet[1534]: I0722 00:37:09.914744    1534 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edcd40174a08804c61cf34ee73700d7e681a6fead373db9a26f5a778d396cbc0"} err="failed to get container status \"edcd40174a08804c61cf34ee73700d7e681a6fead373db9a26f5a778d396cbc0\": rpc error: code = NotFound desc = could not find container \"edcd40174a08804c61cf34ee73700d7e681a6fead373db9a26f5a778d396cbc0\": container with ID starting with edcd40174a08804c61cf34ee73700d7e681a6fead373db9a26f5a778d396cbc0 not found: ID does not exist"
	Jul 22 00:37:09 addons-783853 kubelet[1534]: I0722 00:37:09.997865    1534 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3ecb4a8a-e4fc-46e1-b6cb-e0a2f7adc362-tmp-dir\") on node \"addons-783853\" DevicePath \"\""
	Jul 22 00:37:09 addons-783853 kubelet[1534]: I0722 00:37:09.997912    1534 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qnf8r\" (UniqueName: \"kubernetes.io/projected/3ecb4a8a-e4fc-46e1-b6cb-e0a2f7adc362-kube-api-access-qnf8r\") on node \"addons-783853\" DevicePath \"\""
	
	
	==> storage-provisioner [461123cde69274b0178f9b430cab234c44f0fea1cb24d5aea19d9e852053d4cc] <==
	I0722 00:28:57.018809       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 00:28:57.038428       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 00:28:57.038483       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 00:28:57.047567       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 00:28:57.047729       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-783853_c4cfa4ab-8726-4790-b31b-4df7b6a36898!
	I0722 00:28:57.047786       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0480adff-28ec-454a-a5e8-4dbbc5a90dfd", APIVersion:"v1", ResourceVersion:"911", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-783853_c4cfa4ab-8726-4790-b31b-4df7b6a36898 became leader
	I0722 00:28:57.148393       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-783853_c4cfa4ab-8726-4790-b31b-4df7b6a36898!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-783853 -n addons-783853
helpers_test.go:261: (dbg) Run:  kubectl --context addons-783853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (281.94s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (203.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d5c72237-9137-4541-8b6a-7d14b7510626] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00468988s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-464385 get storageclass -o=json
E0722 00:41:08.236409  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
E0722 00:41:08.276593  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-464385 apply -f testdata/storage-provisioner/pvc.yaml
E0722 00:41:08.357022  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-464385 get pvc myclaim -o=json
E0722 00:41:08.517919  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
E0722 00:41:08.839025  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
E0722 00:41:09.479290  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-464385 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-464385 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7b007a0a-d63a-452e-af58-35dd98d26065] Pending
helpers_test.go:344: "sp-pod" [7b007a0a-d63a-452e-af58-35dd98d26065] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0722 00:41:13.319849  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [7b007a0a-d63a-452e-af58-35dd98d26065] Running
E0722 00:41:18.441142  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003614457s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-464385 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-464385 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-464385 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6b69a435-36b5-406f-8bae-8bea28907074] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-464385 -n functional-464385
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2024-07-22 00:44:23.374085794 +0000 UTC m=+1061.181567507
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-464385 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-464385 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-464385/192.168.49.2
Start Time:       Mon, 22 Jul 2024 00:41:23 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:  10.244.0.7
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kk5tw (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-kk5tw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  3m                    default-scheduler  Successfully assigned default/sp-pod to functional-464385
  Warning  Failed     102s (x2 over 2m30s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     47s (x3 over 2m30s)   kubelet            Error: ErrImagePull
  Warning  Failed     47s                   kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   BackOff    18s (x4 over 2m30s)   kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     18s (x4 over 2m30s)   kubelet            Error: ImagePullBackOff
  Normal   Pulling    5s (x4 over 3m)       kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-464385 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-464385 logs sp-pod -n default: exit status 1 (96.09177ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-464385 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-464385
helpers_test.go:235: (dbg) docker inspect functional-464385:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "17c7a943af807740e076781b7a0528e6e26c7acba136f66f424a80ca7d1f34f4",
	        "Created": "2024-07-22T00:38:35.195805252Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 549804,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-22T00:38:35.330989087Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e2c91a2178aa1acdb3eade350c62303b0cf135b362b91c6aa21cd060c2dbfcac",
	        "ResolvConfPath": "/var/lib/docker/containers/17c7a943af807740e076781b7a0528e6e26c7acba136f66f424a80ca7d1f34f4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/17c7a943af807740e076781b7a0528e6e26c7acba136f66f424a80ca7d1f34f4/hostname",
	        "HostsPath": "/var/lib/docker/containers/17c7a943af807740e076781b7a0528e6e26c7acba136f66f424a80ca7d1f34f4/hosts",
	        "LogPath": "/var/lib/docker/containers/17c7a943af807740e076781b7a0528e6e26c7acba136f66f424a80ca7d1f34f4/17c7a943af807740e076781b7a0528e6e26c7acba136f66f424a80ca7d1f34f4-json.log",
	        "Name": "/functional-464385",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-464385:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-464385",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d374266dfcddbaa602010b66e2bc7d951308f034c9f00a7aaf1cf0decd8f4a50-init/diff:/var/lib/docker/overlay2/0bbbe9537bb983273c69d2396c833f2bdeab0de0333f7a8438fa8a8aec393d0a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d374266dfcddbaa602010b66e2bc7d951308f034c9f00a7aaf1cf0decd8f4a50/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d374266dfcddbaa602010b66e2bc7d951308f034c9f00a7aaf1cf0decd8f4a50/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d374266dfcddbaa602010b66e2bc7d951308f034c9f00a7aaf1cf0decd8f4a50/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-464385",
	                "Source": "/var/lib/docker/volumes/functional-464385/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-464385",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-464385",
	                "name.minikube.sigs.k8s.io": "functional-464385",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aa07738252b776e56c363fecd27c3f90f674694b3c9cfac744f7b38fbed2080f",
	            "SandboxKey": "/var/run/docker/netns/aa07738252b7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38991"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38992"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38995"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38993"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38994"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-464385": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0f3f857aa5e834af17945f6ac70dc8601ce83e4b1667e266f1211bf1aa85b3b3",
	                    "EndpointID": "5634373c1cc50547b185d4515a8d68d89ea76cad4ae51d244fe76603884320e6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-464385",
	                        "17c7a943af80"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-464385 -n functional-464385
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-464385 logs -n 25: (1.649661455s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                 Args                                 |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image          | functional-464385 image ls                                           | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	| image          | functional-464385 image load --daemon                                | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|                | kicbase/echo-server:functional-464385                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-464385 image ls                                           | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	| image          | functional-464385 image save kicbase/echo-server:functional-464385   | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-464385 image rm                                           | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|                | kicbase/echo-server:functional-464385                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-464385 image ls                                           | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	| image          | functional-464385 image load                                         | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-464385 image save --daemon                                | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|                | kicbase/echo-server:functional-464385                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| ssh            | functional-464385 ssh sudo cat                                       | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|                | /etc/test/nested/copy/532157/hosts                                   |                   |         |         |                     |                     |
	| ssh            | functional-464385 ssh sudo cat                                       | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|                | /etc/ssl/certs/532157.pem                                            |                   |         |         |                     |                     |
	| ssh            | functional-464385 ssh sudo cat                                       | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|                | /usr/share/ca-certificates/532157.pem                                |                   |         |         |                     |                     |
	| ssh            | functional-464385 ssh sudo cat                                       | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|                | /etc/ssl/certs/51391683.0                                            |                   |         |         |                     |                     |
	| ssh            | functional-464385 ssh sudo cat                                       | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|                | /etc/ssl/certs/5321572.pem                                           |                   |         |         |                     |                     |
	| ssh            | functional-464385 ssh sudo cat                                       | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|                | /usr/share/ca-certificates/5321572.pem                               |                   |         |         |                     |                     |
	| ssh            | functional-464385 ssh sudo cat                                       | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                            |                   |         |         |                     |                     |
	| image          | functional-464385                                                    | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|                | image ls --format short                                              |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| update-context | functional-464385                                                    | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|                | update-context                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                               |                   |         |         |                     |                     |
	| ssh            | functional-464385 ssh pgrep                                          | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC |                     |
	|                | buildkitd                                                            |                   |         |         |                     |                     |
	| image          | functional-464385 image build -t                                     | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|                | localhost/my-image:functional-464385                                 |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                     |                   |         |         |                     |                     |
	| image          | functional-464385 image ls                                           | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	| image          | functional-464385                                                    | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|                | image ls --format yaml                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-464385                                                    | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|                | image ls --format json                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-464385                                                    | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|                | image ls --format table                                              |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| update-context | functional-464385                                                    | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|                | update-context                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                               |                   |         |         |                     |                     |
	| update-context | functional-464385                                                    | functional-464385 | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|                | update-context                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                               |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 00:42:03
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 00:42:03.615077  560323 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:42:03.615221  560323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:42:03.615247  560323 out.go:304] Setting ErrFile to fd 2...
	I0722 00:42:03.615258  560323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:42:03.615538  560323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-526659/.minikube/bin
	I0722 00:42:03.615936  560323 out.go:298] Setting JSON to false
	I0722 00:42:03.616993  560323 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":116674,"bootTime":1721492249,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0722 00:42:03.617064  560323 start.go:139] virtualization:  
	I0722 00:42:03.619318  560323 out.go:177] * [functional-464385] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0722 00:42:03.621487  560323 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:42:03.621558  560323 notify.go:220] Checking for updates...
	I0722 00:42:03.624920  560323 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:42:03.626771  560323 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-526659/kubeconfig
	I0722 00:42:03.628446  560323 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-526659/.minikube
	I0722 00:42:03.630197  560323 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0722 00:42:03.631790  560323 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 00:42:03.634015  560323 config.go:182] Loaded profile config "functional-464385": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:42:03.634568  560323 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:42:03.661883  560323 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0722 00:42:03.661998  560323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0722 00:42:03.723690  560323 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-22 00:42:03.71455421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0722 00:42:03.723812  560323 docker.go:307] overlay module found
	I0722 00:42:03.725748  560323 out.go:177] * Using the docker driver based on existing profile
	I0722 00:42:03.727242  560323 start.go:297] selected driver: docker
	I0722 00:42:03.727266  560323 start.go:901] validating driver "docker" against &{Name:functional-464385 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-464385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:42:03.727394  560323 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 00:42:03.727505  560323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0722 00:42:03.791327  560323 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-22 00:42:03.781720367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0722 00:42:03.791764  560323 cni.go:84] Creating CNI manager for ""
	I0722 00:42:03.791783  560323 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0722 00:42:03.791849  560323 start.go:340] cluster config:
	{Name:functional-464385 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-464385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:42:03.795188  560323 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Jul 22 00:42:11 functional-464385 crio[4159]: time="2024-07-22 00:42:11.391656487Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Jul 22 00:42:11 functional-464385 crio[4159]: time="2024-07-22 00:42:11.393327392Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-vnmkm/dashboard-metrics-scraper" id=8c376d95-f951-4203-b215-7b5e12348524 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 22 00:42:11 functional-464385 crio[4159]: time="2024-07-22 00:42:11.393423500Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 22 00:42:11 functional-464385 crio[4159]: time="2024-07-22 00:42:11.414254537Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/96fb938854d9404669ed133fa548ba88869d7f0c4e9a545b4abf0c93a59863bd/merged/etc/group: no such file or directory"
	Jul 22 00:42:11 functional-464385 crio[4159]: time="2024-07-22 00:42:11.463585790Z" level=info msg="Created container f62fd67086942a37a0d1fcd6d612b1baa018ba77887c00c1105cd0c5fc802fbe: kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-vnmkm/dashboard-metrics-scraper" id=8c376d95-f951-4203-b215-7b5e12348524 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 22 00:42:11 functional-464385 crio[4159]: time="2024-07-22 00:42:11.464469921Z" level=info msg="Starting container: f62fd67086942a37a0d1fcd6d612b1baa018ba77887c00c1105cd0c5fc802fbe" id=ae06a410-92bf-4c16-a2ff-e73e9570497e name=/runtime.v1.RuntimeService/StartContainer
	Jul 22 00:42:11 functional-464385 crio[4159]: time="2024-07-22 00:42:11.483038004Z" level=info msg="Started container" PID=6740 containerID=f62fd67086942a37a0d1fcd6d612b1baa018ba77887c00c1105cd0c5fc802fbe description=kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-vnmkm/dashboard-metrics-scraper id=ae06a410-92bf-4c16-a2ff-e73e9570497e name=/runtime.v1.RuntimeService/StartContainer sandboxID=08152081615b9d2dbe106004ccaf1cab6436f15783aa52f57ceac364b71dd4ae
	Jul 22 00:42:12 functional-464385 crio[4159]: time="2024-07-22 00:42:12.899602000Z" level=info msg="Checking image status: kicbase/echo-server:functional-464385" id=0bb44724-4618-4319-8d09-318dd3b44dab name=/runtime.v1.ImageService/ImageStatus
	Jul 22 00:42:13 functional-464385 crio[4159]: time="2024-07-22 00:42:13.740626401Z" level=info msg="Checking image status: kicbase/echo-server:functional-464385" id=b3a4f71a-c918-457b-8572-df4efdf5fbd4 name=/runtime.v1.ImageService/ImageStatus
	Jul 22 00:42:14 functional-464385 crio[4159]: time="2024-07-22 00:42:14.833133360Z" level=info msg="Checking image status: kicbase/echo-server:functional-464385" id=d88c1359-0130-45c1-9b60-7cc94d81f349 name=/runtime.v1.ImageService/ImageStatus
	Jul 22 00:42:15 functional-464385 crio[4159]: time="2024-07-22 00:42:15.575565451Z" level=info msg="Checking image status: kicbase/echo-server:functional-464385" id=e576af41-3c99-4ecf-9c65-223374a6381c name=/runtime.v1.ImageService/ImageStatus
	Jul 22 00:42:52 functional-464385 crio[4159]: time="2024-07-22 00:42:52.598185785Z" level=info msg="Checking image status: docker.io/nginx:latest" id=5bf62e1e-60a5-4b5b-9f1e-b3f95bdc0051 name=/runtime.v1.ImageService/ImageStatus
	Jul 22 00:42:52 functional-464385 crio[4159]: time="2024-07-22 00:42:52.598405948Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df docker.io/library/nginx@sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e],Size_:197104786,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=5bf62e1e-60a5-4b5b-9f1e-b3f95bdc0051 name=/runtime.v1.ImageService/ImageStatus
	Jul 22 00:43:06 functional-464385 crio[4159]: time="2024-07-22 00:43:06.598485781Z" level=info msg="Checking image status: docker.io/nginx:latest" id=d864fb79-4214-4b8e-a5f2-b2682973b624 name=/runtime.v1.ImageService/ImageStatus
	Jul 22 00:43:06 functional-464385 crio[4159]: time="2024-07-22 00:43:06.598715618Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df docker.io/library/nginx@sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e],Size_:197104786,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d864fb79-4214-4b8e-a5f2-b2682973b624 name=/runtime.v1.ImageService/ImageStatus
	Jul 22 00:43:06 functional-464385 crio[4159]: time="2024-07-22 00:43:06.599729957Z" level=info msg="Pulling image: docker.io/nginx:latest" id=3f66d51d-62f8-4ccf-9928-9162f457c36a name=/runtime.v1.ImageService/PullImage
	Jul 22 00:43:06 functional-464385 crio[4159]: time="2024-07-22 00:43:06.602167141Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Jul 22 00:43:51 functional-464385 crio[4159]: time="2024-07-22 00:43:51.597735526Z" level=info msg="Checking image status: docker.io/nginx:latest" id=32f647ed-126b-4708-81c1-d83e56643cdd name=/runtime.v1.ImageService/ImageStatus
	Jul 22 00:43:51 functional-464385 crio[4159]: time="2024-07-22 00:43:51.597968160Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df docker.io/library/nginx@sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e],Size_:197104786,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=32f647ed-126b-4708-81c1-d83e56643cdd name=/runtime.v1.ImageService/ImageStatus
	Jul 22 00:44:05 functional-464385 crio[4159]: time="2024-07-22 00:44:05.597957783Z" level=info msg="Checking image status: docker.io/nginx:latest" id=37657360-fc5d-4319-a1b1-c01ac252a635 name=/runtime.v1.ImageService/ImageStatus
	Jul 22 00:44:05 functional-464385 crio[4159]: time="2024-07-22 00:44:05.598190893Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df docker.io/library/nginx@sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e],Size_:197104786,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=37657360-fc5d-4319-a1b1-c01ac252a635 name=/runtime.v1.ImageService/ImageStatus
	Jul 22 00:44:18 functional-464385 crio[4159]: time="2024-07-22 00:44:18.598224397Z" level=info msg="Checking image status: docker.io/nginx:latest" id=02968e1c-c69a-413c-875a-840742d74bc9 name=/runtime.v1.ImageService/ImageStatus
	Jul 22 00:44:18 functional-464385 crio[4159]: time="2024-07-22 00:44:18.598442483Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df docker.io/library/nginx@sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e],Size_:197104786,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=02968e1c-c69a-413c-875a-840742d74bc9 name=/runtime.v1.ImageService/ImageStatus
	Jul 22 00:44:18 functional-464385 crio[4159]: time="2024-07-22 00:44:18.600004082Z" level=info msg="Pulling image: docker.io/nginx:latest" id=edc85700-2973-4730-950a-799b993ff60d name=/runtime.v1.ImageService/PullImage
	Jul 22 00:44:18 functional-464385 crio[4159]: time="2024-07-22 00:44:18.602896812Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	f62fd67086942       docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   2 minutes ago       Running             dashboard-metrics-scraper   0                   08152081615b9       dashboard-metrics-scraper-b5fc48f67-vnmkm
	448cc46b141ed       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         2 minutes ago       Running             kubernetes-dashboard        0                   e23c316ad7e59       kubernetes-dashboard-779776cb65-gshsw
	85d95f023678d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              2 minutes ago       Exited              mount-munger                0                   fa31d1ca27baa       busybox-mount
	ba2436a763ab1       72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb                                                 2 minutes ago       Running             echoserver-arm              0                   85c232f550b43       hello-node-65f5d5cc78-j7kzk
	1f5e083dc6691       registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5           3 minutes ago       Running             echoserver-arm              0                   e3ded68099f83       hello-node-connect-6f49f58cd5-5slm5
	0abe97aaef307       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                  3 minutes ago       Running             nginx                       0                   03719826dcae1       nginx-svc
	03e105f7526c2       f42786f8afd2214fc59fbf9a26531806f562488d4a7d7a31e8b5e9ff6289b800                                                 3 minutes ago       Running             kindnet-cni                 2                   a88ea5aa4906d       kindnet-q8hbj
	a5e553543f80e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 3 minutes ago       Running             storage-provisioner         2                   0b710508ef919       storage-provisioner
	fd3d3692bb0a2       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                 3 minutes ago       Running             coredns                     2                   be4f89fa793eb       coredns-7db6d8ff4d-ntgxk
	4ae0e67dc21f3       2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be                                                 3 minutes ago       Running             kube-proxy                  2                   333a428cc0545       kube-proxy-jxvnl
	f1781c88b4a7d       61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca                                                 3 minutes ago       Running             kube-apiserver              0                   ea844f4ac938b       kube-apiserver-functional-464385
	3ed726f83322c       d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355                                                 3 minutes ago       Running             kube-scheduler              2                   44126b1af8005       kube-scheduler-functional-464385
	92bf1490c0061       8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a                                                 3 minutes ago       Running             kube-controller-manager     2                   1996752a3b519       kube-controller-manager-functional-464385
	e0e78b1153cf1       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                 3 minutes ago       Running             etcd                        2                   70978ffae5eaf       etcd-functional-464385
	8f734a6cfa843       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                 4 minutes ago       Exited              coredns                     1                   be4f89fa793eb       coredns-7db6d8ff4d-ntgxk
	b03ec7b202ec6       8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a                                                 4 minutes ago       Exited              kube-controller-manager     1                   1996752a3b519       kube-controller-manager-functional-464385
	01ba1ba3a353a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 4 minutes ago       Exited              storage-provisioner         1                   0b710508ef919       storage-provisioner
	a5caad6648cf3       2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be                                                 4 minutes ago       Exited              kube-proxy                  1                   333a428cc0545       kube-proxy-jxvnl
	78a5ce73db079       d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355                                                 4 minutes ago       Exited              kube-scheduler              1                   44126b1af8005       kube-scheduler-functional-464385
	343354daa0ca9       f42786f8afd2214fc59fbf9a26531806f562488d4a7d7a31e8b5e9ff6289b800                                                 4 minutes ago       Exited              kindnet-cni                 1                   a88ea5aa4906d       kindnet-q8hbj
	ea11d0d8e9154       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                 4 minutes ago       Exited              etcd                        1                   70978ffae5eaf       etcd-functional-464385
	
	
	==> coredns [8f734a6cfa8431d8e41c5a86bec7324fa8b4650af27a825259a8dbc09aea72c1] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56159 - 33485 "HINFO IN 8750890644244388656.7783934094350945527. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026744779s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fd3d3692bb0a27ddaa0bb8bb4022058666cb9bea833ea4eb6229d7a302ccb380] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58762 - 8083 "HINFO IN 5027191191932783633.5535397494336779448. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.079226939s
	
	
	==> describe nodes <==
	Name:               functional-464385
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-464385
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=functional-464385
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T00_39_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 00:38:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-464385
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 00:44:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 00:42:36 +0000   Mon, 22 Jul 2024 00:38:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 00:42:36 +0000   Mon, 22 Jul 2024 00:38:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 00:42:36 +0000   Mon, 22 Jul 2024 00:38:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 00:42:36 +0000   Mon, 22 Jul 2024 00:39:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-464385
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 5142f8629c914119979b6c62f6dec071
	  System UUID:                f0effb64-5da1-49dc-ba32-a3015005e996
	  Boot ID:                    7a479143-663f-4f08-926c-92bb931337b4
	  Kernel Version:             5.15.0-1064-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-65f5d5cc78-j7kzk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  default                     hello-node-connect-6f49f58cd5-5slm5          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  kube-system                 coredns-7db6d8ff4d-ntgxk                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m8s
	  kube-system                 etcd-functional-464385                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m22s
	  kube-system                 kindnet-q8hbj                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m8s
	  kube-system                 kube-apiserver-functional-464385             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-controller-manager-functional-464385    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-proxy-jxvnl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-scheduler-functional-464385             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-vnmkm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-gshsw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m6s                   kube-proxy       
	  Normal  Starting                 3m50s                  kube-proxy       
	  Normal  Starting                 4m36s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    5m29s (x8 over 5m29s)  kubelet          Node functional-464385 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m29s (x8 over 5m29s)  kubelet          Node functional-464385 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     5m29s (x8 over 5m29s)  kubelet          Node functional-464385 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m22s                  kubelet          Node functional-464385 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m22s                  kubelet          Node functional-464385 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m22s                  kubelet          Node functional-464385 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m22s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           5m9s                   node-controller  Node functional-464385 event: Registered Node functional-464385 in Controller
	  Normal  NodeReady                4m55s                  kubelet          Node functional-464385 status is now: NodeReady
	  Normal  RegisteredNode           4m25s                  node-controller  Node functional-464385 event: Registered Node functional-464385 in Controller
	  Normal  Starting                 3m56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m56s (x8 over 3m56s)  kubelet          Node functional-464385 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m56s (x8 over 3m56s)  kubelet          Node functional-464385 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s (x8 over 3m56s)  kubelet          Node functional-464385 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m38s                  node-controller  Node functional-464385 event: Registered Node functional-464385 in Controller
	
	
	==> dmesg <==
	[  +0.001106] FS-Cache: O-key=[8] 'd47c3b0000000000'
	[  +0.000791] FS-Cache: N-cookie c=0000013e [p=00000135 fl=2 nc=0 na=1]
	[  +0.000989] FS-Cache: N-cookie d=00000000656be40d{9p.inode} n=0000000069f32ce1
	[  +0.001119] FS-Cache: N-key=[8] 'd47c3b0000000000'
	[  +0.002782] FS-Cache: Duplicate cookie detected
	[  +0.000819] FS-Cache: O-cookie c=00000138 [p=00000135 fl=226 nc=0 na=1]
	[  +0.001016] FS-Cache: O-cookie d=00000000656be40d{9p.inode} n=0000000098758b80
	[  +0.001138] FS-Cache: O-key=[8] 'd47c3b0000000000'
	[  +0.000748] FS-Cache: N-cookie c=0000013f [p=00000135 fl=2 nc=0 na=1]
	[  +0.001054] FS-Cache: N-cookie d=00000000656be40d{9p.inode} n=000000006c84cba6
	[  +0.001114] FS-Cache: N-key=[8] 'd47c3b0000000000'
	[  +2.365118] FS-Cache: Duplicate cookie detected
	[  +0.000734] FS-Cache: O-cookie c=00000136 [p=00000135 fl=226 nc=0 na=1]
	[  +0.001001] FS-Cache: O-cookie d=00000000656be40d{9p.inode} n=00000000bb51f306
	[  +0.001074] FS-Cache: O-key=[8] 'd37c3b0000000000'
	[  +0.000734] FS-Cache: N-cookie c=00000141 [p=00000135 fl=2 nc=0 na=1]
	[  +0.001004] FS-Cache: N-cookie d=00000000656be40d{9p.inode} n=0000000069f32ce1
	[  +0.001101] FS-Cache: N-key=[8] 'd37c3b0000000000'
	[  +0.262357] FS-Cache: Duplicate cookie detected
	[  +0.000750] FS-Cache: O-cookie c=0000013b [p=00000135 fl=226 nc=0 na=1]
	[  +0.001028] FS-Cache: O-cookie d=00000000656be40d{9p.inode} n=000000000367e7fa
	[  +0.001122] FS-Cache: O-key=[8] 'd97c3b0000000000'
	[  +0.000790] FS-Cache: N-cookie c=00000142 [p=00000135 fl=2 nc=0 na=1]
	[  +0.000998] FS-Cache: N-cookie d=00000000656be40d{9p.inode} n=00000000531dee6c
	[  +0.001105] FS-Cache: N-key=[8] 'd97c3b0000000000'
	
	
	==> etcd [e0e78b1153cf1000c1c6b9af324fa3c10129dab89243fa7aebe8abb7b480b735] <==
	{"level":"info","ts":"2024-07-22T00:40:29.685679Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-22T00:40:29.684794Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-22T00:40:29.692832Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-22T00:40:29.684872Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-22T00:40:29.685002Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-22T00:40:29.692979Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-22T00:40:29.693035Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-22T00:40:29.685399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-07-22T00:40:29.696975Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-07-22T00:40:29.697163Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:40:29.697237Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:40:30.92878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-22T00:40:30.928895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-22T00:40:30.928938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-07-22T00:40:30.928977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-07-22T00:40:30.92901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-07-22T00:40:30.929049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-07-22T00:40:30.929096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-07-22T00:40:30.935625Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-464385 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T00:40:30.935738Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:40:30.935774Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:40:30.941891Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-07-22T00:40:30.943447Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-22T00:40:30.954117Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T00:40:30.954235Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [ea11d0d8e9154ab55e4fa631810cc0dcec72150237fa02600cc8b0e81e9d04a0] <==
	{"level":"info","ts":"2024-07-22T00:39:42.921156Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-22T00:39:44.696782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-22T00:39:44.696902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-22T00:39:44.696946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-22T00:39:44.696984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-07-22T00:39:44.697017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-07-22T00:39:44.697069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-07-22T00:39:44.697103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-07-22T00:39:44.704929Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-464385 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T00:39:44.705127Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:39:44.710107Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:39:44.710446Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T00:39:44.710504Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T00:39:44.717444Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-22T00:39:44.75823Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-07-22T00:40:12.512054Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-22T00:40:12.512103Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-464385","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-07-22T00:40:12.512191Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T00:40:12.512212Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T00:40:12.512273Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T00:40:12.512343Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-22T00:40:12.55852Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-07-22T00:40:12.561078Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-22T00:40:12.561195Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-22T00:40:12.561211Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-464385","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 00:44:24 up 1 day,  8:26,  0 users,  load average: 0.31, 1.08, 1.66
	Linux functional-464385 5.15.0-1064-aws #70~20.04.1-Ubuntu SMP Thu Jun 27 14:52:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [03e105f7526c2cb1853ed692e122e2d800baa51ffb7840b470b5b49dee470764] <==
	I0722 00:43:14.357907       1 main.go:299] handling current node
	I0722 00:43:24.358359       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0722 00:43:24.358395       1 main.go:299] handling current node
	W0722 00:43:26.530926       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0722 00:43:26.530965       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0722 00:43:34.358257       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0722 00:43:34.358298       1 main.go:299] handling current node
	W0722 00:43:38.520012       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0722 00:43:38.520059       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0722 00:43:44.358141       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0722 00:43:44.358177       1 main.go:299] handling current node
	W0722 00:43:53.350934       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 00:43:53.350971       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0722 00:43:54.358478       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0722 00:43:54.358524       1 main.go:299] handling current node
	I0722 00:44:04.357590       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0722 00:44:04.357621       1 main.go:299] handling current node
	I0722 00:44:14.358352       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0722 00:44:14.358385       1 main.go:299] handling current node
	W0722 00:44:22.042389       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0722 00:44:22.042526       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0722 00:44:22.237333       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0722 00:44:22.237372       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0722 00:44:24.357981       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0722 00:44:24.358015       1 main.go:299] handling current node
	
	
	==> kindnet [343354daa0ca9f7893ec4f7055bf6f336e6b0dd6c6607e75621fcf8b0cdd14de] <==
	E0722 00:39:49.052771       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0722 00:39:49.137185       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 00:39:49.137217       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0722 00:39:51.590238       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0722 00:39:51.590271       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0722 00:39:51.628689       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 00:39:51.628724       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0722 00:39:51.924825       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0722 00:39:51.924856       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0722 00:39:52.813057       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0722 00:39:52.813173       1 main.go:299] handling current node
	W0722 00:39:55.660352       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 00:39:55.660390       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0722 00:39:56.129695       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0722 00:39:56.129730       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0722 00:39:56.994048       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0722 00:39:56.994085       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0722 00:40:02.807028       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0722 00:40:02.807068       1 main.go:299] handling current node
	W0722 00:40:04.720530       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 00:40:04.720569       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0722 00:40:04.987604       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0722 00:40:04.987646       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0722 00:40:06.031821       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0722 00:40:06.031860       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	
	
	==> kube-apiserver [f1781c88b4a7d01f542aebb3c71fc48cb0bcdb1ba794f473268fd5d58fad1f84] <==
	I0722 00:40:33.430750       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0722 00:40:33.436990       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0722 00:40:33.439056       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0722 00:40:33.453363       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0722 00:40:33.453468       1 aggregator.go:165] initial CRD sync complete...
	I0722 00:40:33.453482       1 autoregister_controller.go:141] Starting autoregister controller
	I0722 00:40:33.453490       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0722 00:40:33.453496       1 cache.go:39] Caches are synced for autoregister controller
	I0722 00:40:34.178931       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0722 00:40:35.113907       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0722 00:40:35.237154       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0722 00:40:35.251921       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0722 00:40:35.338033       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0722 00:40:35.361959       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0722 00:40:51.580470       1 controller.go:615] quota admission added evaluator for: endpoints
	I0722 00:40:55.512034       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.37.33"}
	I0722 00:40:55.529606       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0722 00:41:02.231469       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.97.39.223"}
	I0722 00:41:10.729453       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0722 00:41:10.861271       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.102.224"}
	E0722 00:41:22.495463       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:60194: use of closed network connection
	I0722 00:41:25.438883       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.99.160.192"}
	I0722 00:42:04.952714       1 controller.go:615] quota admission added evaluator for: namespaces
	I0722 00:42:05.196540       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.160.134"}
	I0722 00:42:05.239725       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.206.70"}
	
	
	==> kube-controller-manager [92bf1490c006188d220e7c8b6cfb1573178d749b6870fa7252c75b43aab2b794] <==
	E0722 00:42:05.057350       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0722 00:42:05.088940       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="31.536981ms"
	E0722 00:42:05.088978       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0722 00:42:05.089334       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="33.19461ms"
	E0722 00:42:05.089365       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0722 00:42:05.102720       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="13.712718ms"
	E0722 00:42:05.102761       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0722 00:42:05.102827       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="12.334453ms"
	E0722 00:42:05.102850       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0722 00:42:05.110516       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="6.237143ms"
	E0722 00:42:05.110648       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0722 00:42:05.115830       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="5.436993ms"
	E0722 00:42:05.115966       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0722 00:42:05.151855       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="36.453442ms"
	I0722 00:42:05.169815       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="17.831072ms"
	I0722 00:42:05.170085       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="131.571µs"
	I0722 00:42:05.170132       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="17.412µs"
	I0722 00:42:05.229113       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="32.231743ms"
	I0722 00:42:05.280673       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="51.498599ms"
	I0722 00:42:05.280853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="52.596µs"
	I0722 00:42:05.282911       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="55.516µs"
	I0722 00:42:09.979946       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="8.171674ms"
	I0722 00:42:09.981214       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="43.496µs"
	I0722 00:42:11.991175       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="14.054327ms"
	I0722 00:42:11.991316       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="45.884µs"
	
	
	==> kube-controller-manager [b03ec7b202ec6ec53b0feea41c49e725f571e42c3901ca6c4ecd72e45b290ec6] <==
	I0722 00:39:59.998885       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0722 00:39:59.999010       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0722 00:39:59.999134       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0722 00:40:00.000788       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0722 00:40:00.004810       1 shared_informer.go:320] Caches are synced for PVC protection
	I0722 00:40:00.023816       1 shared_informer.go:320] Caches are synced for attach detach
	I0722 00:40:00.035983       1 shared_informer.go:320] Caches are synced for namespace
	I0722 00:40:00.038307       1 shared_informer.go:320] Caches are synced for daemon sets
	I0722 00:40:00.038388       1 shared_informer.go:320] Caches are synced for service account
	I0722 00:40:00.042039       1 shared_informer.go:320] Caches are synced for deployment
	I0722 00:40:00.045986       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0722 00:40:00.049917       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0722 00:40:00.050059       1 shared_informer.go:320] Caches are synced for endpoint
	I0722 00:40:00.049949       1 shared_informer.go:320] Caches are synced for ephemeral
	I0722 00:40:00.066110       1 shared_informer.go:320] Caches are synced for disruption
	I0722 00:40:00.075923       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0722 00:40:00.082240       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0722 00:40:00.082819       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="92.087µs"
	I0722 00:40:00.150683       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0722 00:40:00.153315       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0722 00:40:00.226232       1 shared_informer.go:320] Caches are synced for resource quota
	I0722 00:40:00.228316       1 shared_informer.go:320] Caches are synced for resource quota
	I0722 00:40:00.658244       1 shared_informer.go:320] Caches are synced for garbage collector
	I0722 00:40:00.658280       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0722 00:40:00.661489       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [4ae0e67dc21f32ceed2ddfed8cfa49856f8e9952c709481afd95ca29ceef466f] <==
	I0722 00:40:34.147680       1 server_linux.go:69] "Using iptables proxy"
	I0722 00:40:34.181337       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0722 00:40:34.317049       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0722 00:40:34.317105       1 server_linux.go:165] "Using iptables Proxier"
	I0722 00:40:34.321325       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0722 00:40:34.321359       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0722 00:40:34.321391       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 00:40:34.321604       1 server.go:872] "Version info" version="v1.30.3"
	I0722 00:40:34.321628       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:40:34.323400       1 config.go:192] "Starting service config controller"
	I0722 00:40:34.323427       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 00:40:34.323451       1 config.go:101] "Starting endpoint slice config controller"
	I0722 00:40:34.323456       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 00:40:34.323917       1 config.go:319] "Starting node config controller"
	I0722 00:40:34.323933       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 00:40:34.425307       1 shared_informer.go:320] Caches are synced for service config
	I0722 00:40:34.425383       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 00:40:34.425961       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [a5caad6648cf3989a8ea943fdb5923e83a385d397aacce5e43d56a60e7cbf209] <==
	I0722 00:39:44.406440       1 server_linux.go:69] "Using iptables proxy"
	I0722 00:39:48.005183       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0722 00:39:48.517560       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0722 00:39:48.519086       1 server_linux.go:165] "Using iptables Proxier"
	I0722 00:39:48.600614       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0722 00:39:48.600756       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0722 00:39:48.600831       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 00:39:48.601164       1 server.go:872] "Version info" version="v1.30.3"
	I0722 00:39:48.601509       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:39:48.602774       1 config.go:192] "Starting service config controller"
	I0722 00:39:48.602871       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 00:39:48.602986       1 config.go:101] "Starting endpoint slice config controller"
	I0722 00:39:48.603032       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 00:39:48.603748       1 config.go:319] "Starting node config controller"
	I0722 00:39:48.603838       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 00:39:48.706046       1 shared_informer.go:320] Caches are synced for node config
	I0722 00:39:48.706204       1 shared_informer.go:320] Caches are synced for service config
	I0722 00:39:48.706286       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3ed726f83322cd61109257fc10b80f0938673eb269d863447a10ba22beef3600] <==
	I0722 00:40:32.324001       1 serving.go:380] Generated self-signed cert in-memory
	I0722 00:40:33.849849       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0722 00:40:33.853385       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:40:33.866985       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0722 00:40:33.867163       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0722 00:40:33.867201       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0722 00:40:33.867261       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0722 00:40:33.867899       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0722 00:40:33.867970       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0722 00:40:33.870085       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0722 00:40:33.870161       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0722 00:40:34.067336       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0722 00:40:34.068852       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0722 00:40:34.072819       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kube-scheduler [78a5ce73db079009b01171d8fe27ed922c82194768526b4ecf3e2bfb985bc2b5] <==
	I0722 00:39:46.960444       1 serving.go:380] Generated self-signed cert in-memory
	I0722 00:39:48.697132       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0722 00:39:48.697164       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:39:48.717959       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0722 00:39:48.718070       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0722 00:39:48.718230       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0722 00:39:48.718268       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0722 00:39:48.718321       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0722 00:39:48.718352       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0722 00:39:48.719453       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0722 00:39:48.719539       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0722 00:39:48.819016       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0722 00:39:48.819093       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0722 00:39:48.819768       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0722 00:40:12.513642       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0722 00:40:12.513708       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0722 00:40:12.513832       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0722 00:40:12.513855       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
	I0722 00:40:12.513879       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0722 00:40:12.515645       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 22 00:41:56 functional-464385 kubelet[4448]: I0722 00:41:56.993874    4448 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88b1d1ae-967f-4381-8ce9-16799653e0ce-kube-api-access-kmtln" (OuterVolumeSpecName: "kube-api-access-kmtln") pod "88b1d1ae-967f-4381-8ce9-16799653e0ce" (UID: "88b1d1ae-967f-4381-8ce9-16799653e0ce"). InnerVolumeSpecName "kube-api-access-kmtln". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 22 00:41:57 functional-464385 kubelet[4448]: I0722 00:41:57.092806    4448 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/88b1d1ae-967f-4381-8ce9-16799653e0ce-test-volume\") on node \"functional-464385\" DevicePath \"\""
	Jul 22 00:41:57 functional-464385 kubelet[4448]: I0722 00:41:57.092856    4448 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kmtln\" (UniqueName: \"kubernetes.io/projected/88b1d1ae-967f-4381-8ce9-16799653e0ce-kube-api-access-kmtln\") on node \"functional-464385\" DevicePath \"\""
	Jul 22 00:41:57 functional-464385 kubelet[4448]: I0722 00:41:57.933632    4448 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa31d1ca27baa15eec035ca32f56354a9a14333cdf2af0bf75b4b56776dbd523"
	Jul 22 00:42:05 functional-464385 kubelet[4448]: I0722 00:42:05.142552    4448 topology_manager.go:215] "Topology Admit Handler" podUID="c3e42d0f-c4da-4e7e-a78e-62b95b9378dc" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-779776cb65-gshsw"
	Jul 22 00:42:05 functional-464385 kubelet[4448]: E0722 00:42:05.142648    4448 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="88b1d1ae-967f-4381-8ce9-16799653e0ce" containerName="mount-munger"
	Jul 22 00:42:05 functional-464385 kubelet[4448]: I0722 00:42:05.142714    4448 memory_manager.go:354] "RemoveStaleState removing state" podUID="88b1d1ae-967f-4381-8ce9-16799653e0ce" containerName="mount-munger"
	Jul 22 00:42:05 functional-464385 kubelet[4448]: I0722 00:42:05.230178    4448 topology_manager.go:215] "Topology Admit Handler" podUID="17ecb2dc-f2ce-4f22-95e4-9123543a3476" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-b5fc48f67-vnmkm"
	Jul 22 00:42:05 functional-464385 kubelet[4448]: I0722 00:42:05.260365    4448 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clknb\" (UniqueName: \"kubernetes.io/projected/c3e42d0f-c4da-4e7e-a78e-62b95b9378dc-kube-api-access-clknb\") pod \"kubernetes-dashboard-779776cb65-gshsw\" (UID: \"c3e42d0f-c4da-4e7e-a78e-62b95b9378dc\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-gshsw"
	Jul 22 00:42:05 functional-464385 kubelet[4448]: I0722 00:42:05.260429    4448 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/17ecb2dc-f2ce-4f22-95e4-9123543a3476-tmp-volume\") pod \"dashboard-metrics-scraper-b5fc48f67-vnmkm\" (UID: \"17ecb2dc-f2ce-4f22-95e4-9123543a3476\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-vnmkm"
	Jul 22 00:42:05 functional-464385 kubelet[4448]: I0722 00:42:05.260457    4448 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ntzn\" (UniqueName: \"kubernetes.io/projected/17ecb2dc-f2ce-4f22-95e4-9123543a3476-kube-api-access-4ntzn\") pod \"dashboard-metrics-scraper-b5fc48f67-vnmkm\" (UID: \"17ecb2dc-f2ce-4f22-95e4-9123543a3476\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-vnmkm"
	Jul 22 00:42:05 functional-464385 kubelet[4448]: I0722 00:42:05.260489    4448 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c3e42d0f-c4da-4e7e-a78e-62b95b9378dc-tmp-volume\") pod \"kubernetes-dashboard-779776cb65-gshsw\" (UID: \"c3e42d0f-c4da-4e7e-a78e-62b95b9378dc\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-gshsw"
	Jul 22 00:42:11 functional-464385 kubelet[4448]: I0722 00:42:11.976249    4448 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-gshsw" podStartSLOduration=2.834316272 podStartE2EDuration="6.976211599s" podCreationTimestamp="2024-07-22 00:42:05 +0000 UTC" firstStartedPulling="2024-07-22 00:42:05.486131713 +0000 UTC m=+97.038347647" lastFinishedPulling="2024-07-22 00:42:09.628027032 +0000 UTC m=+101.180242974" observedRunningTime="2024-07-22 00:42:09.976273978 +0000 UTC m=+101.528489912" watchObservedRunningTime="2024-07-22 00:42:11.976211599 +0000 UTC m=+103.528427533"
	Jul 22 00:42:41 functional-464385 kubelet[4448]: E0722 00:42:41.680656    4448 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jul 22 00:42:41 functional-464385 kubelet[4448]: E0722 00:42:41.680725    4448 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jul 22 00:42:41 functional-464385 kubelet[4448]: E0722 00:42:41.681464    4448 kuberuntime_manager.go:1256] container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kk5tw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(6b69a435-36b5-406f-8bae-8bea28907074): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Jul 22 00:42:41 functional-464385 kubelet[4448]: E0722 00:42:41.681543    4448 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6b69a435-36b5-406f-8bae-8bea28907074"
	Jul 22 00:42:52 functional-464385 kubelet[4448]: E0722 00:42:52.599645    4448 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="6b69a435-36b5-406f-8bae-8bea28907074"
	Jul 22 00:42:52 functional-464385 kubelet[4448]: I0722 00:42:52.610108    4448 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-vnmkm" podStartSLOduration=41.789834431 podStartE2EDuration="47.610089626s" podCreationTimestamp="2024-07-22 00:42:05 +0000 UTC" firstStartedPulling="2024-07-22 00:42:05.565803426 +0000 UTC m=+97.118019360" lastFinishedPulling="2024-07-22 00:42:11.386058613 +0000 UTC m=+102.938274555" observedRunningTime="2024-07-22 00:42:11.978671397 +0000 UTC m=+103.530887339" watchObservedRunningTime="2024-07-22 00:42:52.610089626 +0000 UTC m=+144.162305560"
	Jul 22 00:43:36 functional-464385 kubelet[4448]: E0722 00:43:36.995860    4448 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jul 22 00:43:36 functional-464385 kubelet[4448]: E0722 00:43:36.995926    4448 kuberuntime_image.go:55] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jul 22 00:43:36 functional-464385 kubelet[4448]: E0722 00:43:36.996026    4448 kuberuntime_manager.go:1256] container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kk5tw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(6b69a435-36b5-406f-8bae-8bea28907074): ErrImagePull: loading manifest for target platform: reading manifest sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Jul 22 00:43:36 functional-464385 kubelet[4448]: E0722 00:43:36.996056    4448 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6b69a435-36b5-406f-8bae-8bea28907074"
	Jul 22 00:43:51 functional-464385 kubelet[4448]: E0722 00:43:51.598420    4448 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="6b69a435-36b5-406f-8bae-8bea28907074"
	Jul 22 00:44:05 functional-464385 kubelet[4448]: E0722 00:44:05.598439    4448 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="6b69a435-36b5-406f-8bae-8bea28907074"
	
	
	==> kubernetes-dashboard [448cc46b141ed7da4228da76b2928c6b5e89fef97d72647da56dff69763af48c] <==
	2024/07/22 00:42:09 Using namespace: kubernetes-dashboard
	2024/07/22 00:42:09 Using in-cluster config to connect to apiserver
	2024/07/22 00:42:09 Using secret token for csrf signing
	2024/07/22 00:42:09 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/07/22 00:42:09 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/07/22 00:42:09 Successful initial request to the apiserver, version: v1.30.3
	2024/07/22 00:42:09 Generating JWE encryption key
	2024/07/22 00:42:09 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/07/22 00:42:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/07/22 00:42:09 Initializing JWE encryption key from synchronized object
	2024/07/22 00:42:09 Creating in-cluster Sidecar client
	2024/07/22 00:42:09 Serving insecurely on HTTP port: 9090
	2024/07/22 00:42:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/22 00:42:39 Successful request to sidecar
	2024/07/22 00:42:09 Starting overwatch
	
	
	==> storage-provisioner [01ba1ba3a353a4b4bc429c0b53451f0780bb70fe59b0527a629f5242f7b7302b] <==
	I0722 00:39:43.320826       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 00:39:48.066248       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 00:39:48.066337       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 00:40:05.481238       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 00:40:05.481964       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-464385_68350faa-088a-4b24-91be-8a5099ae441e!
	I0722 00:40:05.482918       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1948835b-cf38-4acd-b455-23d0f3f885c9", APIVersion:"v1", ResourceVersion:"536", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-464385_68350faa-088a-4b24-91be-8a5099ae441e became leader
	I0722 00:40:05.582421       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-464385_68350faa-088a-4b24-91be-8a5099ae441e!
	
	
	==> storage-provisioner [a5e553543f80e1cc87daf61617343ee897e4cd93099842b37ddcbe5ffb0e445a] <==
	I0722 00:40:34.105103       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 00:40:34.172260       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 00:40:34.172398       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 00:40:51.585577       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 00:40:51.586081       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1948835b-cf38-4acd-b455-23d0f3f885c9", APIVersion:"v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-464385_67d2f3e3-3455-4997-807d-15801a52e73d became leader
	I0722 00:40:51.588793       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-464385_67d2f3e3-3455-4997-807d-15801a52e73d!
	I0722 00:40:51.689785       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-464385_67d2f3e3-3455-4997-807d-15801a52e73d!
	I0722 00:41:08.488958       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0722 00:41:08.489152       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"75eb1ca4-37d3-4ed0-ac17-667c37bf64c4", APIVersion:"v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0722 00:41:08.489104       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    63d42e00-3ecd-4cd5-a0d9-eb53e7e56d74 392 0 2024-07-22 00:39:16 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-07-22 00:39:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-75eb1ca4-37d3-4ed0-ac17-667c37bf64c4 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  75eb1ca4-37d3-4ed0-ac17-667c37bf64c4 693 0 2024-07-22 00:41:08 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-07-22 00:41:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-07-22 00:41:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0722 00:41:08.493776       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-75eb1ca4-37d3-4ed0-ac17-667c37bf64c4" provisioned
	I0722 00:41:08.493836       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0722 00:41:08.493883       1 volume_store.go:212] Trying to save persistentvolume "pvc-75eb1ca4-37d3-4ed0-ac17-667c37bf64c4"
	I0722 00:41:08.526252       1 volume_store.go:219] persistentvolume "pvc-75eb1ca4-37d3-4ed0-ac17-667c37bf64c4" saved
	I0722 00:41:08.529348       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"75eb1ca4-37d3-4ed0-ac17-667c37bf64c4", APIVersion:"v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-75eb1ca4-37d3-4ed0-ac17-667c37bf64c4
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-464385 -n functional-464385
helpers_test.go:261: (dbg) Run:  kubectl --context functional-464385 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-464385 describe pod busybox-mount sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-464385 describe pod busybox-mount sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-464385/192.168.49.2
	Start Time:       Mon, 22 Jul 2024 00:41:37 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://85d95f023678dfa978c6f2375acc8c8ba8327b1554e881d709e118f69b151568
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 22 Jul 2024 00:41:55 +0000
	      Finished:     Mon, 22 Jul 2024 00:41:55 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kmtln (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-kmtln:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  2m48s  default-scheduler  Successfully assigned default/busybox-mount to functional-464385
	  Normal  Pulling    2m49s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     2m31s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.716s (17.515s including waiting). Image size: 3774172 bytes.
	  Normal  Created    2m31s  kubelet            Created container mount-munger
	  Normal  Started    2m31s  kubelet            Started container mount-munger
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-464385/192.168.49.2
	Start Time:       Mon, 22 Jul 2024 00:41:23 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kk5tw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-kk5tw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  3m3s                  default-scheduler  Successfully assigned default/sp-pod to functional-464385
	  Warning  Failed     105s (x2 over 2m33s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     50s (x3 over 2m33s)   kubelet            Error: ErrImagePull
	  Warning  Failed     50s                   kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    21s (x4 over 2m33s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     21s (x4 over 2m33s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    8s (x4 over 3m3s)     kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (203.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 image load --daemon kicbase/echo-server:functional-464385 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 image ls
functional_test.go:442: expected "kicbase/echo-server:functional-464385" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.84s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 image load --daemon kicbase/echo-server:functional-464385 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 image ls
functional_test.go:442: expected "kicbase/echo-server:functional-464385" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-464385
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 image load --daemon kicbase/echo-server:functional-464385 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 image ls
functional_test.go:442: expected "kicbase/echo-server:functional-464385" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 image save kicbase/echo-server:functional-464385 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.28s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

** stderr ** 
	I0722 00:42:15.865149  561202 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:42:15.866297  561202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:42:15.866314  561202 out.go:304] Setting ErrFile to fd 2...
	I0722 00:42:15.866320  561202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:42:15.866730  561202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-526659/.minikube/bin
	I0722 00:42:15.867450  561202 config.go:182] Loaded profile config "functional-464385": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:42:15.867667  561202 config.go:182] Loaded profile config "functional-464385": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:42:15.868192  561202 cli_runner.go:164] Run: docker container inspect functional-464385 --format={{.State.Status}}
	I0722 00:42:15.885767  561202 ssh_runner.go:195] Run: systemctl --version
	I0722 00:42:15.885867  561202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-464385
	I0722 00:42:15.902453  561202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38991 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/functional-464385/id_rsa Username:docker}
	I0722 00:42:15.989666  561202 cache_images.go:289] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W0722 00:42:15.989733  561202 cache_images.go:253] Failed to load cached images for "functional-464385": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I0722 00:42:15.989759  561202 cache_images.go:265] failed pushing to: functional-464385

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)


Test pass (295/336)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 9.41
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.30.3/json-events 6.8
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.2
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-beta.0/json-events 8.87
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.39
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.31
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.23
30 TestBinaryMirror 0.54
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 237.42
38 TestAddons/parallel/Registry 14.65
40 TestAddons/parallel/InspektorGadget 11.77
44 TestAddons/parallel/CSI 57.3
45 TestAddons/parallel/Headlamp 11.99
46 TestAddons/parallel/CloudSpanner 6.56
47 TestAddons/parallel/LocalPath 52.4
48 TestAddons/parallel/NvidiaDevicePlugin 6.52
49 TestAddons/parallel/Yakd 5
53 TestAddons/serial/GCPAuth/Namespaces 0.17
54 TestAddons/StoppedEnableDisable 12.21
55 TestCertOptions 35.03
56 TestCertExpiration 281.89
58 TestForceSystemdFlag 40.02
59 TestForceSystemdEnv 42.63
65 TestErrorSpam/setup 31.1
66 TestErrorSpam/start 0.7
67 TestErrorSpam/status 0.96
68 TestErrorSpam/pause 1.67
69 TestErrorSpam/unpause 1.73
70 TestErrorSpam/stop 1.42
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 62.88
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 30.1
77 TestFunctional/serial/KubeContext 0.07
78 TestFunctional/serial/KubectlGetPods 0.11
81 TestFunctional/serial/CacheCmd/cache/add_remote 4.33
82 TestFunctional/serial/CacheCmd/cache/add_local 1.03
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
86 TestFunctional/serial/CacheCmd/cache/cache_reload 2.08
87 TestFunctional/serial/CacheCmd/cache/delete 0.12
88 TestFunctional/serial/MinikubeKubectlCmd 0.13
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
90 TestFunctional/serial/ExtraConfig 40.8
91 TestFunctional/serial/ComponentHealth 0.1
92 TestFunctional/serial/LogsCmd 1.67
93 TestFunctional/serial/LogsFileCmd 1.71
94 TestFunctional/serial/InvalidService 4.56
96 TestFunctional/parallel/ConfigCmd 0.58
97 TestFunctional/parallel/DashboardCmd 6.75
98 TestFunctional/parallel/DryRun 0.39
99 TestFunctional/parallel/InternationalLanguage 0.18
100 TestFunctional/parallel/StatusCmd 0.97
104 TestFunctional/parallel/ServiceCmdConnect 14.6
105 TestFunctional/parallel/AddonsCmd 0.14
108 TestFunctional/parallel/SSHCmd 0.73
109 TestFunctional/parallel/CpCmd 2.38
111 TestFunctional/parallel/FileSync 0.28
112 TestFunctional/parallel/CertSync 1.58
116 TestFunctional/parallel/NodeLabels 0.1
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.65
120 TestFunctional/parallel/License 0.27
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.45
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.13
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
132 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
133 TestFunctional/parallel/ServiceCmd/List 0.5
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
136 TestFunctional/parallel/ServiceCmd/Format 0.37
137 TestFunctional/parallel/ServiceCmd/URL 0.38
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
139 TestFunctional/parallel/ProfileCmd/profile_list 0.38
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
141 TestFunctional/parallel/MountCmd/any-port 22.8
142 TestFunctional/parallel/MountCmd/specific-port 1.96
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.77
144 TestFunctional/parallel/Version/short 0.05
145 TestFunctional/parallel/Version/components 1.01
146 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
147 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
148 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
149 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
150 TestFunctional/parallel/ImageCommands/ImageBuild 2.61
151 TestFunctional/parallel/ImageCommands/Setup 0.72
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.34
159 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
160 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
161 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.01
164 TestFunctional/delete_minikube_cached_images 0.01
168 TestMultiControlPlane/serial/StartCluster 191.34
169 TestMultiControlPlane/serial/DeployApp 9.96
170 TestMultiControlPlane/serial/PingHostFromPods 1.53
171 TestMultiControlPlane/serial/AddWorkerNode 36.28
172 TestMultiControlPlane/serial/NodeLabels 0.11
173 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.71
174 TestMultiControlPlane/serial/CopyFile 18.46
175 TestMultiControlPlane/serial/StopSecondaryNode 12.7
176 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
177 TestMultiControlPlane/serial/RestartSecondaryNode 30.8
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 4.24
179 TestMultiControlPlane/serial/RestartClusterKeepsNodes 202.79
180 TestMultiControlPlane/serial/DeleteSecondaryNode 12.92
181 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.52
182 TestMultiControlPlane/serial/StopCluster 35.81
183 TestMultiControlPlane/serial/RestartCluster 63.03
184 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.55
185 TestMultiControlPlane/serial/AddSecondaryNode 76.36
186 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.78
190 TestJSONOutput/start/Command 59.48
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 0.72
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.63
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 5.85
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.21
215 TestKicCustomNetwork/create_custom_network 39.69
216 TestKicCustomNetwork/use_default_bridge_network 31.92
217 TestKicExistingNetwork 34.09
218 TestKicCustomSubnet 37.52
219 TestKicStaticIP 34.92
220 TestMainNoArgs 0.05
221 TestMinikubeProfile 73.68
224 TestMountStart/serial/StartWithMountFirst 6.99
225 TestMountStart/serial/VerifyMountFirst 0.25
226 TestMountStart/serial/StartWithMountSecond 9.04
227 TestMountStart/serial/VerifyMountSecond 0.25
228 TestMountStart/serial/DeleteFirst 1.61
229 TestMountStart/serial/VerifyMountPostDelete 0.25
230 TestMountStart/serial/Stop 1.2
231 TestMountStart/serial/RestartStopped 8.17
232 TestMountStart/serial/VerifyMountPostStop 0.26
235 TestMultiNode/serial/FreshStart2Nodes 89.46
236 TestMultiNode/serial/DeployApp2Nodes 4.94
237 TestMultiNode/serial/PingHostFrom2Pods 1.04
238 TestMultiNode/serial/AddNode 30.57
239 TestMultiNode/serial/MultiNodeLabels 0.09
240 TestMultiNode/serial/ProfileList 0.32
241 TestMultiNode/serial/CopyFile 9.62
242 TestMultiNode/serial/StopNode 2.23
243 TestMultiNode/serial/StartAfterStop 9.82
244 TestMultiNode/serial/RestartKeepsNodes 86.38
245 TestMultiNode/serial/DeleteNode 5.23
246 TestMultiNode/serial/StopMultiNode 23.85
247 TestMultiNode/serial/RestartMultiNode 54.54
248 TestMultiNode/serial/ValidateNameConflict 35.18
253 TestPreload 125.95
255 TestScheduledStopUnix 107.31
258 TestInsufficientStorage 10.53
259 TestRunningBinaryUpgrade 79.2
261 TestKubernetesUpgrade 390.48
262 TestMissingContainerUpgrade 135.33
264 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
265 TestNoKubernetes/serial/StartWithK8s 41.07
266 TestNoKubernetes/serial/StartWithStopK8s 19.24
267 TestNoKubernetes/serial/Start 10.2
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
269 TestNoKubernetes/serial/ProfileList 1.03
270 TestNoKubernetes/serial/Stop 1.23
271 TestNoKubernetes/serial/StartNoArgs 6.74
272 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.25
273 TestStoppedBinaryUpgrade/Setup 0.64
274 TestStoppedBinaryUpgrade/Upgrade 118.8
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.16
284 TestPause/serial/Start 62.77
285 TestPause/serial/SecondStartNoReconfiguration 24.14
286 TestPause/serial/Pause 1
287 TestPause/serial/VerifyStatus 0.34
288 TestPause/serial/Unpause 0.91
289 TestPause/serial/PauseAgain 1.32
290 TestPause/serial/DeletePaused 3.44
291 TestPause/serial/VerifyDeletedResources 12.9
299 TestNetworkPlugins/group/false 4.98
304 TestStartStop/group/old-k8s-version/serial/FirstStart 163.01
305 TestStartStop/group/old-k8s-version/serial/DeployApp 9.56
306 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.1
307 TestStartStop/group/old-k8s-version/serial/Stop 12.01
308 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
309 TestStartStop/group/old-k8s-version/serial/SecondStart 127.92
311 TestStartStop/group/no-preload/serial/FirstStart 76.88
312 TestStartStop/group/no-preload/serial/DeployApp 9.37
313 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
314 TestStartStop/group/no-preload/serial/Stop 12.02
315 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.27
316 TestStartStop/group/no-preload/serial/SecondStart 266.44
317 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
318 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
319 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
320 TestStartStop/group/old-k8s-version/serial/Pause 3.52
322 TestStartStop/group/embed-certs/serial/FirstStart 62.16
323 TestStartStop/group/embed-certs/serial/DeployApp 8.35
324 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
325 TestStartStop/group/embed-certs/serial/Stop 11.97
326 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
327 TestStartStop/group/embed-certs/serial/SecondStart 268.24
328 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
329 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.12
330 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
331 TestStartStop/group/no-preload/serial/Pause 3.08
333 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 60.13
334 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.37
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.1
336 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.97
337 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
338 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 266.92
339 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
340 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.11
341 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
342 TestStartStop/group/embed-certs/serial/Pause 3.03
344 TestStartStop/group/newest-cni/serial/FirstStart 41.95
345 TestStartStop/group/newest-cni/serial/DeployApp 0
346 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.41
347 TestStartStop/group/newest-cni/serial/Stop 1.42
348 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
349 TestStartStop/group/newest-cni/serial/SecondStart 16.34
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
353 TestStartStop/group/newest-cni/serial/Pause 3.14
354 TestNetworkPlugins/group/auto/Start 64.29
355 TestNetworkPlugins/group/auto/KubeletFlags 0.29
356 TestNetworkPlugins/group/auto/NetCatPod 13.28
357 TestNetworkPlugins/group/auto/DNS 0.19
358 TestNetworkPlugins/group/auto/Localhost 0.16
359 TestNetworkPlugins/group/auto/HairPin 0.16
360 TestNetworkPlugins/group/kindnet/Start 62.39
361 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
362 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
363 TestNetworkPlugins/group/kindnet/NetCatPod 12.23
364 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
365 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
366 TestNetworkPlugins/group/kindnet/DNS 0.17
367 TestNetworkPlugins/group/kindnet/Localhost 0.16
368 TestNetworkPlugins/group/kindnet/HairPin 0.16
369 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
370 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.57
371 TestNetworkPlugins/group/calico/Start 74.74
372 TestNetworkPlugins/group/custom-flannel/Start 76.13
373 TestNetworkPlugins/group/calico/ControllerPod 6.01
374 TestNetworkPlugins/group/calico/KubeletFlags 0.31
375 TestNetworkPlugins/group/calico/NetCatPod 11.24
376 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
377 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.29
378 TestNetworkPlugins/group/calico/DNS 0.24
379 TestNetworkPlugins/group/calico/Localhost 0.27
380 TestNetworkPlugins/group/calico/HairPin 0.2
381 TestNetworkPlugins/group/custom-flannel/DNS 0.26
382 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
383 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
384 TestNetworkPlugins/group/enable-default-cni/Start 62.49
385 TestNetworkPlugins/group/flannel/Start 67.25
386 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
387 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.36
388 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
389 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
390 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
391 TestNetworkPlugins/group/flannel/ControllerPod 6.01
392 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
393 TestNetworkPlugins/group/flannel/NetCatPod 11.35
394 TestNetworkPlugins/group/bridge/Start 89.39
395 TestNetworkPlugins/group/flannel/DNS 0.29
396 TestNetworkPlugins/group/flannel/Localhost 0.2
397 TestNetworkPlugins/group/flannel/HairPin 0.2
398 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
399 TestNetworkPlugins/group/bridge/NetCatPod 10.25
400 TestNetworkPlugins/group/bridge/DNS 0.18
401 TestNetworkPlugins/group/bridge/Localhost 0.15
402 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (9.41s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-899574 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-899574 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.413947293s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.41s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-899574
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-899574: exit status 85 (70.592515ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-899574 | jenkins | v1.33.1 | 22 Jul 24 00:26 UTC |          |
	|         | -p download-only-899574        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 00:26:42
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 00:26:42.277541  532162 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:26:42.277697  532162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:26:42.277723  532162 out.go:304] Setting ErrFile to fd 2...
	I0722 00:26:42.277743  532162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:26:42.278036  532162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-526659/.minikube/bin
	W0722 00:26:42.278197  532162 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19312-526659/.minikube/config/config.json: open /home/jenkins/minikube-integration/19312-526659/.minikube/config/config.json: no such file or directory
	I0722 00:26:42.278665  532162 out.go:298] Setting JSON to true
	I0722 00:26:42.279584  532162 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":115753,"bootTime":1721492249,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0722 00:26:42.279657  532162 start.go:139] virtualization:  
	I0722 00:26:42.282590  532162 out.go:97] [download-only-899574] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0722 00:26:42.282829  532162 notify.go:220] Checking for updates...
	W0722 00:26:42.282722  532162 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19312-526659/.minikube/cache/preloaded-tarball: no such file or directory
	I0722 00:26:42.284417  532162 out.go:169] MINIKUBE_LOCATION=19312
	I0722 00:26:42.286305  532162 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:26:42.287945  532162 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19312-526659/kubeconfig
	I0722 00:26:42.289635  532162 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-526659/.minikube
	I0722 00:26:42.291405  532162 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0722 00:26:42.295093  532162 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0722 00:26:42.295418  532162 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:26:42.325590  532162 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0722 00:26:42.325709  532162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0722 00:26:42.385867  532162 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-07-22 00:26:42.37419057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0722 00:26:42.385998  532162 docker.go:307] overlay module found
	I0722 00:26:42.388781  532162 out.go:97] Using the docker driver based on user configuration
	I0722 00:26:42.388820  532162 start.go:297] selected driver: docker
	I0722 00:26:42.388834  532162 start.go:901] validating driver "docker" against <nil>
	I0722 00:26:42.388976  532162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0722 00:26:42.442761  532162 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-07-22 00:26:42.433751001 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0722 00:26:42.442929  532162 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 00:26:42.443214  532162 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0722 00:26:42.443371  532162 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0722 00:26:42.445723  532162 out.go:169] Using Docker driver with root privileges
	I0722 00:26:42.447599  532162 cni.go:84] Creating CNI manager for ""
	I0722 00:26:42.447621  532162 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0722 00:26:42.447633  532162 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0722 00:26:42.447734  532162 start.go:340] cluster config:
	{Name:download-only-899574 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-899574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:26:42.450022  532162 out.go:97] Starting "download-only-899574" primary control-plane node in "download-only-899574" cluster
	I0722 00:26:42.450056  532162 cache.go:121] Beginning downloading kic base image for docker with crio
	I0722 00:26:42.452099  532162 out.go:97] Pulling base image v0.0.44-1721324606-19298 ...
	I0722 00:26:42.452138  532162 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 00:26:42.452308  532162 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0722 00:26:42.466621  532162 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0722 00:26:42.466831  532162 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0722 00:26:42.466924  532162 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0722 00:26:42.509490  532162 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0722 00:26:42.509531  532162 cache.go:56] Caching tarball of preloaded images
	I0722 00:26:42.510324  532162 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 00:26:42.512832  532162 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0722 00:26:42.512856  532162 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0722 00:26:42.606583  532162 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19312-526659/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-899574 host does not exist
	  To start a cluster, run: "minikube start -p download-only-899574"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-899574
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.30.3/json-events (6.8s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-182209 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-182209 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.803722442s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (6.80s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-182209
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-182209: exit status 85 (79.237876ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-899574 | jenkins | v1.33.1 | 22 Jul 24 00:26 UTC |                     |
	|         | -p download-only-899574        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 22 Jul 24 00:26 UTC | 22 Jul 24 00:26 UTC |
	| delete  | -p download-only-899574        | download-only-899574 | jenkins | v1.33.1 | 22 Jul 24 00:26 UTC | 22 Jul 24 00:26 UTC |
	| start   | -o=json --download-only        | download-only-182209 | jenkins | v1.33.1 | 22 Jul 24 00:26 UTC |                     |
	|         | -p download-only-182209        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 00:26:52
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 00:26:52.116725  532378 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:26:52.116901  532378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:26:52.116911  532378 out.go:304] Setting ErrFile to fd 2...
	I0722 00:26:52.116916  532378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:26:52.117190  532378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-526659/.minikube/bin
	I0722 00:26:52.117638  532378 out.go:298] Setting JSON to true
	I0722 00:26:52.118601  532378 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":115763,"bootTime":1721492249,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0722 00:26:52.118673  532378 start.go:139] virtualization:  
	I0722 00:26:52.120890  532378 out.go:97] [download-only-182209] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0722 00:26:52.121113  532378 notify.go:220] Checking for updates...
	I0722 00:26:52.122592  532378 out.go:169] MINIKUBE_LOCATION=19312
	I0722 00:26:52.124318  532378 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:26:52.125860  532378 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19312-526659/kubeconfig
	I0722 00:26:52.127437  532378 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-526659/.minikube
	I0722 00:26:52.128988  532378 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0722 00:26:52.132320  532378 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0722 00:26:52.132614  532378 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:26:52.151947  532378 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0722 00:26:52.152045  532378 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0722 00:26:52.224821  532378 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-22 00:26:52.215547533 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0722 00:26:52.224941  532378 docker.go:307] overlay module found
	I0722 00:26:52.226830  532378 out.go:97] Using the docker driver based on user configuration
	I0722 00:26:52.226856  532378 start.go:297] selected driver: docker
	I0722 00:26:52.226863  532378 start.go:901] validating driver "docker" against <nil>
	I0722 00:26:52.226981  532378 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0722 00:26:52.281111  532378 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-22 00:26:52.272494327 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0722 00:26:52.281287  532378 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 00:26:52.281589  532378 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0722 00:26:52.281749  532378 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0722 00:26:52.283662  532378 out.go:169] Using Docker driver with root privileges
	I0722 00:26:52.285198  532378 cni.go:84] Creating CNI manager for ""
	I0722 00:26:52.285217  532378 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0722 00:26:52.285231  532378 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0722 00:26:52.285314  532378 start.go:340] cluster config:
	{Name:download-only-182209 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-182209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:26:52.286926  532378 out.go:97] Starting "download-only-182209" primary control-plane node in "download-only-182209" cluster
	I0722 00:26:52.286950  532378 cache.go:121] Beginning downloading kic base image for docker with crio
	I0722 00:26:52.288651  532378 out.go:97] Pulling base image v0.0.44-1721324606-19298 ...
	I0722 00:26:52.288685  532378 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:26:52.288904  532378 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0722 00:26:52.310840  532378 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0722 00:26:52.310951  532378 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0722 00:26:52.310976  532378 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0722 00:26:52.310985  532378 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0722 00:26:52.310993  532378 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0722 00:26:52.357134  532378 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	I0722 00:26:52.357160  532378 cache.go:56] Caching tarball of preloaded images
	I0722 00:26:52.357919  532378 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:26:52.360033  532378 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0722 00:26:52.360067  532378 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 ...
	I0722 00:26:52.455737  532378 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4?checksum=md5:bace9a3612be7d31e4d3c3d446951ced -> /home/jenkins/minikube-integration/19312-526659/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-182209 host does not exist
	  To start a cluster, run: "minikube start -p download-only-182209"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

TestDownloadOnly/v1.30.3/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.20s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-182209
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.0-beta.0/json-events (8.87s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-177991 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-177991 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.867774773s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (8.87s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.39s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-177991
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-177991: exit status 85 (393.072265ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-899574 | jenkins | v1.33.1 | 22 Jul 24 00:26 UTC |                     |
	|         | -p download-only-899574             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 22 Jul 24 00:26 UTC | 22 Jul 24 00:26 UTC |
	| delete  | -p download-only-899574             | download-only-899574 | jenkins | v1.33.1 | 22 Jul 24 00:26 UTC | 22 Jul 24 00:26 UTC |
	| start   | -o=json --download-only             | download-only-182209 | jenkins | v1.33.1 | 22 Jul 24 00:26 UTC |                     |
	|         | -p download-only-182209             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 22 Jul 24 00:26 UTC | 22 Jul 24 00:26 UTC |
	| delete  | -p download-only-182209             | download-only-182209 | jenkins | v1.33.1 | 22 Jul 24 00:26 UTC | 22 Jul 24 00:26 UTC |
	| start   | -o=json --download-only             | download-only-177991 | jenkins | v1.33.1 | 22 Jul 24 00:26 UTC |                     |
	|         | -p download-only-177991             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 00:26:59
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 00:26:59.323371  532585 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:26:59.323529  532585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:26:59.323538  532585 out.go:304] Setting ErrFile to fd 2...
	I0722 00:26:59.323544  532585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:26:59.323800  532585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-526659/.minikube/bin
	I0722 00:26:59.324188  532585 out.go:298] Setting JSON to true
	I0722 00:26:59.325113  532585 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":115770,"bootTime":1721492249,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0722 00:26:59.325187  532585 start.go:139] virtualization:  
	I0722 00:26:59.328450  532585 out.go:97] [download-only-177991] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0722 00:26:59.328663  532585 notify.go:220] Checking for updates...
	I0722 00:26:59.331353  532585 out.go:169] MINIKUBE_LOCATION=19312
	I0722 00:26:59.334129  532585 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:26:59.336816  532585 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19312-526659/kubeconfig
	I0722 00:26:59.339477  532585 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-526659/.minikube
	I0722 00:26:59.342055  532585 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0722 00:26:59.347054  532585 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0722 00:26:59.347354  532585 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:26:59.379493  532585 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0722 00:26:59.379586  532585 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0722 00:26:59.434265  532585 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-22 00:26:59.424846744 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0722 00:26:59.434383  532585 docker.go:307] overlay module found
	I0722 00:26:59.437093  532585 out.go:97] Using the docker driver based on user configuration
	I0722 00:26:59.437123  532585 start.go:297] selected driver: docker
	I0722 00:26:59.437129  532585 start.go:901] validating driver "docker" against <nil>
	I0722 00:26:59.437238  532585 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0722 00:26:59.492628  532585 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-22 00:26:59.483500957 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0722 00:26:59.492856  532585 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 00:26:59.493141  532585 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0722 00:26:59.493297  532585 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0722 00:26:59.496141  532585 out.go:169] Using Docker driver with root privileges
	I0722 00:26:59.498641  532585 cni.go:84] Creating CNI manager for ""
	I0722 00:26:59.498659  532585 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0722 00:26:59.498671  532585 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0722 00:26:59.498776  532585 start.go:340] cluster config:
	{Name:download-only-177991 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-177991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:26:59.501532  532585 out.go:97] Starting "download-only-177991" primary control-plane node in "download-only-177991" cluster
	I0722 00:26:59.501559  532585 cache.go:121] Beginning downloading kic base image for docker with crio
	I0722 00:26:59.504103  532585 out.go:97] Pulling base image v0.0.44-1721324606-19298 ...
	I0722 00:26:59.504128  532585 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0722 00:26:59.504294  532585 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0722 00:26:59.519166  532585 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0722 00:26:59.519315  532585 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0722 00:26:59.519340  532585 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0722 00:26:59.519345  532585 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0722 00:26:59.519357  532585 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0722 00:26:59.563033  532585 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I0722 00:26:59.563060  532585 cache.go:56] Caching tarball of preloaded images
	I0722 00:26:59.563980  532585 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0722 00:26:59.566916  532585 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0722 00:26:59.566957  532585 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4 ...
	I0722 00:26:59.697218  532585 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:70b5971c257ae4defe1f5d041a04e29c -> /home/jenkins/minikube-integration/19312-526659/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-177991 host does not exist
	  To start a cluster, run: "minikube start -p download-only-177991"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.39s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.31s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.31s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-177991
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-175978 --alsologtostderr --binary-mirror http://127.0.0.1:32849 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-175978" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-175978
--- PASS: TestBinaryMirror (0.54s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-783853
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-783853: exit status 85 (74.067185ms)

-- stdout --
	* Profile "addons-783853" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-783853"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-783853
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-783853: exit status 85 (68.543849ms)

-- stdout --
	* Profile "addons-783853" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-783853"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (237.42s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-783853 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-783853 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m57.416857619s)
--- PASS: TestAddons/Setup (237.42s)

TestAddons/parallel/Registry (14.65s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 46.526684ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-m9wqh" [d562888d-bd3c-4b3f-9adc-aea340501248] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005611817s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qs2hs" [c0bbfdb6-7c30-4635-b7e5-b3509185506d] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004446678s
addons_test.go:342: (dbg) Run:  kubectl --context addons-783853 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-783853 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-783853 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.632314808s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-783853 ip
2024/07/22 00:31:22 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-783853 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.65s)

TestAddons/parallel/InspektorGadget (11.77s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-wcvmx" [65d3f75f-9542-4a64-84c3-82b5a58e99a9] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003632999s
addons_test.go:843: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-783853
addons_test.go:843: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-783853: (5.766803982s)
--- PASS: TestAddons/parallel/InspektorGadget (11.77s)

TestAddons/parallel/CSI (57.3s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 8.507761ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-783853 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-783853 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [60b0069f-b4d8-4265-8ae7-d6ed67b53dae] Pending
helpers_test.go:344: "task-pv-pod" [60b0069f-b4d8-4265-8ae7-d6ed67b53dae] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [60b0069f-b4d8-4265-8ae7-d6ed67b53dae] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.00314183s
addons_test.go:586: (dbg) Run:  kubectl --context addons-783853 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-783853 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-783853 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-783853 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-783853 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-783853 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-783853 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6126aa04-75e0-403a-a5ff-ad55c0b7ad89] Pending
helpers_test.go:344: "task-pv-pod-restore" [6126aa04-75e0-403a-a5ff-ad55c0b7ad89] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6126aa04-75e0-403a-a5ff-ad55c0b7ad89] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00343555s
addons_test.go:628: (dbg) Run:  kubectl --context addons-783853 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-783853 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-783853 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-arm64 -p addons-783853 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-arm64 -p addons-783853 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.775443637s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-783853 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (57.30s)

TestAddons/parallel/Headlamp (11.99s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-783853 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-hbj4f" [7c150578-a675-4241-bf2c-774c14d0b8ed] Pending
helpers_test.go:344: "headlamp-7867546754-hbj4f" [7c150578-a675-4241-bf2c-774c14d0b8ed] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-hbj4f" [7c150578-a675-4241-bf2c-774c14d0b8ed] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003551408s
--- PASS: TestAddons/parallel/Headlamp (11.99s)

TestAddons/parallel/CloudSpanner (6.56s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-cjbtj" [9c43fd64-06b3-44fa-a8e0-2d8acaf6ad75] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003776571s
addons_test.go:862: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-783853
--- PASS: TestAddons/parallel/CloudSpanner (6.56s)

TestAddons/parallel/LocalPath (52.4s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-783853 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-783853 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-783853 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ac12ef42-71d2-4ccc-9b81-4d88bda23f44] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ac12ef42-71d2-4ccc-9b81-4d88bda23f44] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ac12ef42-71d2-4ccc-9b81-4d88bda23f44] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003008443s
addons_test.go:992: (dbg) Run:  kubectl --context addons-783853 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-arm64 -p addons-783853 ssh "cat /opt/local-path-provisioner/pvc-a10fb3fc-c913-4254-9002-57f08ecaf0f2_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-783853 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-783853 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-arm64 -p addons-783853 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-arm64 -p addons-783853 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.339461054s)
--- PASS: TestAddons/parallel/LocalPath (52.40s)

TestAddons/parallel/NvidiaDevicePlugin (6.52s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-jwvh7" [03f22a4c-c638-40a2-8a03-0b0770a62063] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.008374548s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-783853
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.52s)

TestAddons/parallel/Yakd (5s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-7hmg4" [e1924b05-b629-4952-b668-a8ee26e20181] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003693271s
--- PASS: TestAddons/parallel/Yakd (5.00s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-783853 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-783853 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/StoppedEnableDisable (12.21s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-783853
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-783853: (11.950165294s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-783853
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-783853
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-783853
--- PASS: TestAddons/StoppedEnableDisable (12.21s)

TestCertOptions (35.03s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-299898 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-299898 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (32.366700854s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-299898 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-299898 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-299898 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-299898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-299898
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-299898: (1.990091811s)
--- PASS: TestCertOptions (35.03s)

TestCertExpiration (281.89s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-847622 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-847622 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.853790397s)
E0722 01:21:01.799075  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
E0722 01:21:08.197985  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-847622 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-847622 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (56.165691258s)
helpers_test.go:175: Cleaning up "cert-expiration-847622" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-847622
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-847622: (2.873553194s)
--- PASS: TestCertExpiration (281.89s)

TestForceSystemdFlag (40.02s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-975128 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-975128 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.234839166s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-975128 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-975128" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-975128
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-975128: (2.442561842s)
--- PASS: TestForceSystemdFlag (40.02s)

TestForceSystemdEnv (42.63s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-128665 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-128665 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.04560225s)
helpers_test.go:175: Cleaning up "force-systemd-env-128665" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-128665
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-128665: (2.58434892s)
--- PASS: TestForceSystemdEnv (42.63s)

TestErrorSpam/setup (31.1s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-529949 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-529949 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-529949 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-529949 --driver=docker  --container-runtime=crio: (31.100654614s)
--- PASS: TestErrorSpam/setup (31.10s)

TestErrorSpam/start (0.7s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-529949 --log_dir /tmp/nospam-529949 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-529949 --log_dir /tmp/nospam-529949 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-529949 --log_dir /tmp/nospam-529949 start --dry-run
--- PASS: TestErrorSpam/start (0.70s)

TestErrorSpam/status (0.96s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-529949 --log_dir /tmp/nospam-529949 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-529949 --log_dir /tmp/nospam-529949 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-529949 --log_dir /tmp/nospam-529949 status
--- PASS: TestErrorSpam/status (0.96s)

TestErrorSpam/pause (1.67s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-529949 --log_dir /tmp/nospam-529949 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-529949 --log_dir /tmp/nospam-529949 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-529949 --log_dir /tmp/nospam-529949 pause
--- PASS: TestErrorSpam/pause (1.67s)

TestErrorSpam/unpause (1.73s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-529949 --log_dir /tmp/nospam-529949 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-529949 --log_dir /tmp/nospam-529949 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-529949 --log_dir /tmp/nospam-529949 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

TestErrorSpam/stop (1.42s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-529949 --log_dir /tmp/nospam-529949 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-529949 --log_dir /tmp/nospam-529949 stop: (1.235182757s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-529949 --log_dir /tmp/nospam-529949 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-529949 --log_dir /tmp/nospam-529949 stop
--- PASS: TestErrorSpam/stop (1.42s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19312-526659/.minikube/files/etc/test/nested/copy/532157/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (62.88s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-464385 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-464385 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m2.879665086s)
--- PASS: TestFunctional/serial/StartWithProxy (62.88s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (30.1s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-464385 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-464385 --alsologtostderr -v=8: (30.090809432s)
functional_test.go:659: soft start took 30.09595116s for "functional-464385" cluster.
--- PASS: TestFunctional/serial/SoftStart (30.10s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-464385 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-464385 cache add registry.k8s.io/pause:3.1: (1.448402977s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-464385 cache add registry.k8s.io/pause:3.3: (1.414028307s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-464385 cache add registry.k8s.io/pause:latest: (1.462395535s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.33s)

TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-464385 /tmp/TestFunctionalserialCacheCmdcacheadd_local3630546516/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 cache add minikube-local-cache-test:functional-464385
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 cache delete minikube-local-cache-test:functional-464385
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-464385
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-464385 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (282.220034ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-464385 cache reload: (1.177083318s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.08s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 kubectl -- --context functional-464385 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-464385 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (40.8s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-464385 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-464385 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.798289988s)
functional_test.go:757: restart took 40.798414338s for "functional-464385" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.80s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-464385 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.67s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-464385 logs: (1.668913717s)
--- PASS: TestFunctional/serial/LogsCmd (1.67s)

TestFunctional/serial/LogsFileCmd (1.71s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 logs --file /tmp/TestFunctionalserialLogsFileCmd1225184565/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-464385 logs --file /tmp/TestFunctionalserialLogsFileCmd1225184565/001/logs.txt: (1.707589587s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.71s)

TestFunctional/serial/InvalidService (4.56s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-464385 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-464385
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-464385: exit status 115 (584.918586ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31434 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-464385 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.56s)

TestFunctional/parallel/ConfigCmd (0.58s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-464385 config get cpus: exit status 14 (73.550001ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-464385 config get cpus: exit status 14 (63.696885ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.58s)
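Illustrative sketch only, not minikube source: the ConfigCmd run above depends on `config get` failing with a nonzero exit status once a key is unset (minikube reports status 14). A minimal stand-in key=value store shows the same contract; the store file and function names here are hypothetical.

```shell
# Stand-in for the set/get/unset cycle exercised above.
store=$(mktemp)
config_set()   { echo "$1=$2" >> "$store"; }
config_get()   { grep "^$1=" "$store" | cut -d= -f2 | grep .; }  # empty result -> nonzero exit
config_unset() { grep -v "^$1=" "$store" > "$store.tmp"; mv "$store.tmp" "$store"; }

config_get cpus || echo "get on unset key fails (exit $?)"
config_set cpus 2
config_get cpus                 # prints 2
config_unset cpus
config_get cpus || echo "get on unset key fails again (exit $?)"
```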

TestFunctional/parallel/DashboardCmd (6.75s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-464385 --alsologtostderr -v=1]
2024/07/22 00:42:10 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-464385 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 560519: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.75s)

TestFunctional/parallel/DryRun (0.39s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-464385 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-464385 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (167.068161ms)

-- stdout --
	* [functional-464385] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-526659/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-526659/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0722 00:42:03.453342  560281 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:42:03.453488  560281 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:42:03.453511  560281 out.go:304] Setting ErrFile to fd 2...
	I0722 00:42:03.453527  560281 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:42:03.453803  560281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-526659/.minikube/bin
	I0722 00:42:03.454218  560281 out.go:298] Setting JSON to false
	I0722 00:42:03.455215  560281 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":116674,"bootTime":1721492249,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0722 00:42:03.455284  560281 start.go:139] virtualization:  
	I0722 00:42:03.457613  560281 out.go:177] * [functional-464385] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0722 00:42:03.459401  560281 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:42:03.459456  560281 notify.go:220] Checking for updates...
	I0722 00:42:03.463638  560281 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:42:03.465538  560281 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-526659/kubeconfig
	I0722 00:42:03.467242  560281 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-526659/.minikube
	I0722 00:42:03.468959  560281 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0722 00:42:03.470550  560281 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 00:42:03.472860  560281 config.go:182] Loaded profile config "functional-464385": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:42:03.473436  560281 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:42:03.500391  560281 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0722 00:42:03.500520  560281 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0722 00:42:03.558814  560281 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-22 00:42:03.549777795 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0722 00:42:03.558918  560281 docker.go:307] overlay module found
	I0722 00:42:03.560927  560281 out.go:177] * Using the docker driver based on existing profile
	I0722 00:42:03.562770  560281 start.go:297] selected driver: docker
	I0722 00:42:03.562793  560281 start.go:901] validating driver "docker" against &{Name:functional-464385 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-464385 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:42:03.562917  560281 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 00:42:03.565173  560281 out.go:177] 
	W0722 00:42:03.567020  560281 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0722 00:42:03.568792  560281 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-464385 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.39s)
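Hypothetical sketch of the memory floor the dry run trips (the real check lives in minikube's Go code; the numbers are taken from the log above): a request below the usable minimum is rejected up front with a dedicated exit status, matching the `exit status 23` / RSRC_INSUFFICIENT_REQ_MEMORY output.

```shell
# Minimal stand-in for the pre-flight memory validation seen above.
MIN_MB=1800
check_memory() {
  if [ "$1" -lt "$MIN_MB" ]; then
    echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: ${1}MiB < ${MIN_MB}MB" >&2
    return 23
  fi
}
check_memory 250 || echo "rejected with status $?"
check_memory 4000 && echo "accepted"
```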

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-464385 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-464385 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (180.284421ms)

-- stdout --
	* [functional-464385] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-526659/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-526659/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0722 00:42:03.285401  560238 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:42:03.285578  560238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:42:03.285589  560238 out.go:304] Setting ErrFile to fd 2...
	I0722 00:42:03.285595  560238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:42:03.285945  560238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-526659/.minikube/bin
	I0722 00:42:03.286305  560238 out.go:298] Setting JSON to false
	I0722 00:42:03.287333  560238 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":116674,"bootTime":1721492249,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0722 00:42:03.287423  560238 start.go:139] virtualization:  
	I0722 00:42:03.290351  560238 out.go:177] * [functional-464385] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0722 00:42:03.292470  560238 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:42:03.292666  560238 notify.go:220] Checking for updates...
	I0722 00:42:03.296240  560238 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:42:03.297941  560238 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-526659/kubeconfig
	I0722 00:42:03.299702  560238 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-526659/.minikube
	I0722 00:42:03.301269  560238 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0722 00:42:03.303820  560238 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 00:42:03.306321  560238 config.go:182] Loaded profile config "functional-464385": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:42:03.306875  560238 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:42:03.334786  560238 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0722 00:42:03.334897  560238 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0722 00:42:03.392467  560238 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-22 00:42:03.382216073 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0722 00:42:03.392581  560238 docker.go:307] overlay module found
	I0722 00:42:03.394616  560238 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0722 00:42:03.396428  560238 start.go:297] selected driver: docker
	I0722 00:42:03.396451  560238 start.go:901] validating driver "docker" against &{Name:functional-464385 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-464385 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:42:03.396571  560238 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 00:42:03.398927  560238 out.go:177] 
	W0722 00:42:03.400428  560238 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0722 00:42:03.402258  560238 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (0.97s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.97s)

TestFunctional/parallel/ServiceCmdConnect (14.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-464385 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-464385 expose deployment hello-node-connect --type=NodePort --port=8080
E0722 00:41:10.759470  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-5slm5" [e3992722-a574-447b-b6af-59d165d845b4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-5slm5" [e3992722-a574-447b-b6af-59d165d845b4] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 14.003941297s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:32722
functional_test.go:1671: http://192.168.49.2:32722: success! body:

Hostname: hello-node-connect-6f49f58cd5-5slm5

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32722
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (14.60s)
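The connect test resolves a NodePort endpoint and then probes it. A small illustrative parse of the URL that `service hello-node-connect --url` printed above (the value is copied from the log; the parsing uses plain POSIX parameter expansion):

```shell
# Split the logged NodePort endpoint into host and port.
url="http://192.168.49.2:32722"
hostport=${url#http://}
host=${hostport%:*}
port=${hostport##*:}
echo "host=$host port=$port"   # prints: host=192.168.49.2 port=32722
```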

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/SSHCmd (0.73s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

TestFunctional/parallel/CpCmd (2.38s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh -n functional-464385 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 cp functional-464385:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1131210199/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh -n functional-464385 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh -n functional-464385 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.38s)
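CpCmd above copies a file into the node, then reads it back over `ssh` to confirm the bytes survived, including a destination whose parent directories do not yet exist. The same round-trip check, sketched locally with plain `cp`/`cmp` standing in for `minikube cp` and `minikube ssh` (paths are hypothetical temp locations):

```shell
# Copy-then-verify round trip, mirroring the cp-test.txt checks above.
src=$(mktemp)
dstdir=$(mktemp -d)/does/not/exist        # destination parents must be created
echo "cp-test contents" > "$src"
mkdir -p "$dstdir" && cp "$src" "$dstdir/cp-test.txt"
cmp -s "$src" "$dstdir/cp-test.txt" && echo "copy verified"
```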

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/532157/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh "sudo cat /etc/test/nested/copy/532157/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.58s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/532157.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh "sudo cat /etc/ssl/certs/532157.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/532157.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh "sudo cat /usr/share/ca-certificates/532157.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/5321572.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh "sudo cat /etc/ssl/certs/5321572.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/5321572.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh "sudo cat /usr/share/ca-certificates/5321572.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.58s)
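CertSync above checks each synced certificate at three paths: its PEM name under both /etc/ssl/certs and /usr/share/ca-certificates, plus a hash-named link like 51391683.0 (the `NNNNNNNN.0` form is the OpenSSL subject-hash naming convention). A self-contained sketch of that layout under a temp root; the hash value is copied from the log, not recomputed, and the file contents are dummies:

```shell
# Recreate the three-path layout CertSync verifies, then check each path.
root=$(mktemp -d)
mkdir -p "$root/etc/ssl/certs" "$root/usr/share/ca-certificates"
echo "dummy PEM data" > "$root/usr/share/ca-certificates/532157.pem"
cp "$root/usr/share/ca-certificates/532157.pem" "$root/etc/ssl/certs/532157.pem"
ln -s "$root/etc/ssl/certs/532157.pem" "$root/etc/ssl/certs/51391683.0"
for f in etc/ssl/certs/532157.pem usr/share/ca-certificates/532157.pem etc/ssl/certs/51391683.0; do
  [ -e "$root/$f" ] && echo "ok: $f"
done
```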

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-464385 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-464385 ssh "sudo systemctl is-active docker": exit status 1 (308.29873ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-464385 ssh "sudo systemctl is-active containerd": exit status 1 (344.997534ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)

TestFunctional/parallel/License (0.27s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-464385 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-464385 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-464385 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 557085: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-464385 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-464385 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-464385 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d173257d-8057-4ff8-902a-a9bbc2bae12b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [d173257d-8057-4ff8-902a-a9bbc2bae12b] Running
E0722 00:41:08.199866  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
E0722 00:41:08.205708  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
E0722 00:41:08.215962  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004650904s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-464385 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.39.223 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-464385 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-464385 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-464385 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-j7kzk" [d55cd76e-ffb3-4eca-b1aa-5eefb23ee672] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-j7kzk" [d55cd76e-ffb3-4eca-b1aa-5eefb23ee672] Running
E0722 00:41:28.681526  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003991967s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

TestFunctional/parallel/ServiceCmd/List (0.5s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 service list -o json
functional_test.go:1490: Took "504.136542ms" to run "out/minikube-linux-arm64 -p functional-464385 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:31425
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:31425
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "317.486894ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "64.987537ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "320.613974ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "53.382177ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/MountCmd/any-port (22.8s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-464385 /tmp/TestFunctionalparallelMountCmdany-port2201407984/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721608895731445674" to /tmp/TestFunctionalparallelMountCmdany-port2201407984/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721608895731445674" to /tmp/TestFunctionalparallelMountCmdany-port2201407984/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721608895731445674" to /tmp/TestFunctionalparallelMountCmdany-port2201407984/001/test-1721608895731445674
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-464385 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (321.190744ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 22 00:41 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 22 00:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 22 00:41 test-1721608895731445674
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh cat /mount-9p/test-1721608895731445674
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-464385 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [88b1d1ae-967f-4381-8ce9-16799653e0ce] Pending
helpers_test.go:344: "busybox-mount" [88b1d1ae-967f-4381-8ce9-16799653e0ce] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0722 00:41:49.162220  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [88b1d1ae-967f-4381-8ce9-16799653e0ce] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [88b1d1ae-967f-4381-8ce9-16799653e0ce] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 20.003204368s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-464385 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-464385 /tmp/TestFunctionalparallelMountCmdany-port2201407984/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (22.80s)

TestFunctional/parallel/MountCmd/specific-port (1.96s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-464385 /tmp/TestFunctionalparallelMountCmdspecific-port582776049/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-464385 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (304.062327ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-464385 /tmp/TestFunctionalparallelMountCmdspecific-port582776049/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-464385 ssh "sudo umount -f /mount-9p": exit status 1 (425.111796ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-464385 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-464385 /tmp/TestFunctionalparallelMountCmdspecific-port582776049/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.96s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.77s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-464385 /tmp/TestFunctionalparallelMountCmdVerifyCleanup43199834/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-464385 /tmp/TestFunctionalparallelMountCmdVerifyCleanup43199834/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-464385 /tmp/TestFunctionalparallelMountCmdVerifyCleanup43199834/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-464385 ssh "findmnt -T" /mount1: exit status 1 (510.620827ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-464385 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-464385 /tmp/TestFunctionalparallelMountCmdVerifyCleanup43199834/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-464385 /tmp/TestFunctionalparallelMountCmdVerifyCleanup43199834/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-464385 /tmp/TestFunctionalparallelMountCmdVerifyCleanup43199834/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.77s)

TestFunctional/parallel/Version/short (0.05s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (1.01s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-464385 version -o=json --components: (1.012876649s)
--- PASS: TestFunctional/parallel/Version/components (1.01s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-464385 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240719-e7903573
docker.io/kindest/kindnetd:v20240715-585640e9
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-464385 image ls --format short --alsologtostderr:
I0722 00:42:19.312277  561842 out.go:291] Setting OutFile to fd 1 ...
I0722 00:42:19.312459  561842 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 00:42:19.312470  561842 out.go:304] Setting ErrFile to fd 2...
I0722 00:42:19.312476  561842 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 00:42:19.312778  561842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-526659/.minikube/bin
I0722 00:42:19.313471  561842 config.go:182] Loaded profile config "functional-464385": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0722 00:42:19.313623  561842 config.go:182] Loaded profile config "functional-464385": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0722 00:42:19.314153  561842 cli_runner.go:164] Run: docker container inspect functional-464385 --format={{.State.Status}}
I0722 00:42:19.330906  561842 ssh_runner.go:195] Run: systemctl --version
I0722 00:42:19.330977  561842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-464385
I0722 00:42:19.346783  561842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38991 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/functional-464385/id_rsa Username:docker}
I0722 00:42:19.432981  561842 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-464385 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5e32961ddcea3 | 90.3MB |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| localhost/my-image                      | functional-464385  | c7eb7c6f07afb | 1.64MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | 2437cf7621777 | 58.8MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-proxy              | v1.30.3            | 2351f570ed0ea | 89.2MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/library/nginx                 | alpine             | 5461b18aaccf3 | 46.7MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 61773190d42ff | 114MB  |
| registry.k8s.io/kube-scheduler          | v1.30.3            | d48f992a22722 | 61.6MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| docker.io/kindest/kindnetd              | v20240719-e7903573 | f42786f8afd22 | 90.3MB |
| docker.io/library/nginx                 | latest             | 443d199e8bfcc | 197MB  |
| registry.k8s.io/etcd                    | 3.5.12-0           | 014faa467e297 | 140MB  |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 8e97cdb19e7cc | 108MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-464385 image ls --format table --alsologtostderr:
I0722 00:42:22.751697  562228 out.go:291] Setting OutFile to fd 1 ...
I0722 00:42:22.752041  562228 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 00:42:22.752079  562228 out.go:304] Setting ErrFile to fd 2...
I0722 00:42:22.752099  562228 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 00:42:22.752417  562228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-526659/.minikube/bin
I0722 00:42:22.753146  562228 config.go:182] Loaded profile config "functional-464385": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0722 00:42:22.753344  562228 config.go:182] Loaded profile config "functional-464385": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0722 00:42:22.753933  562228 cli_runner.go:164] Run: docker container inspect functional-464385 --format={{.State.Status}}
I0722 00:42:22.771659  562228 ssh_runner.go:195] Run: systemctl --version
I0722 00:42:22.771710  562228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-464385
I0722 00:42:22.788847  562228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38991 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/functional-464385/id_rsa Username:docker}
I0722 00:42:22.877002  562228 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)
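The table rows above are pipe-delimited with fixed-width padding. As an aside, a minimal sketch of splitting one row back into fields (the sample row is copied verbatim from the output above):

```python
# Split one pipe-delimited row from the `image ls --format table` output above.
row = "| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |"

# Drop the outer pipes, split on the inner ones, and trim the padding.
repo, tag, image_id, size = (field.strip() for field in row.strip("|").split("|"))
print(repo, tag, image_id, size)  # → registry.k8s.io/pause 3.3 3d18732f8686c 487kB
```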
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-464385 image ls --format json --alsologtostderr:
[
{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},
{"id":"5461b18aaccf366faf9fba071a5f1ac333cd13435366b32c5e9b8ec903fa18a1","repoDigests":["docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55","docker.io/library/nginx@sha256:a7164ab2224553c2da2303d490474d4d546d2141eef1c6367a38d37d46992c62"],"repoTags":["docker.io/library/nginx:alpine"],"size":"46671377"},
{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},
{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},
{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":["registry.k8s.io/kube-apiserver@sha256:30d6b23df5ccf427536840a904047f3cd946c9c78bf9750f0d82b18409d6089e","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"113538528"},
{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4","registry.k8s.io/kube-scheduler@sha256:f194dea192a672732bc45ef2e7a0bcf28080ae6bd0626bd2c444edda987d7b95"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"61568326"},
{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},
{"id":"5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2","repoDigests":["docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493","docker.io/kindest/kindnetd@sha256:ca8545687e833593ef3047fdbb04957ab9a32153bc36738760b6975879ada987"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"90278450"},
{"id":"443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618","repoDigests":["docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df","docker.io/library/nginx@sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e"],"repoTags":["docker.io/library/nginx:latest"],"size":"197104786"},
{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},
{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},
{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"58812704"},
{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b","registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"140414767"},
{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},
{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},
{"id":"f42786f8afd2214fc59fbf9a26531806f562488d4a7d7a31e8b5e9ff6289b800","repoDigests":["docker.io/kindest/kindnetd@sha256:14100a3a7aca6cad3de3f26ee342ad937ca7d2844b1983d3baa7bf5f491baa7a","docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"],"repoTags":["docker.io/kindest/kindnetd:v20240719-e7903573"],"size":"90281007"},
{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},
{"id":"4c21f628e28594b2c2e38af4226f4864f469042f786901f6b434ca2d2bfbfd87","repoDigests":["docker.io/library/d902f7489944e17380d3e83e576600ce445834d7fb619d5b357a0bb561fa37fc-tmp@sha256:6f8cce96d9da009f49a96e3a7d6cd7177afa2e451df45338e608a2414d4b5421"],"repoTags":[],"size":"1637644"},
{"id":"c7eb7c6f07afbac0b6b4ca6c02ff6b3472ab8fc1069b88833867bd64ddda36a6","repoDigests":["localhost/my-image@sha256:0f033101be921c245f04ab752d433c2fb2779b5349d0cc551f706b70bc32c334"],"repoTags":["localhost/my-image:functional-464385"],"size":"1640225"},
{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:078d7873222b53b4530e619e8dc5bccf8420557f3be2b4996a65e59ba4a09499","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"108229958"},
{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":["registry.k8s.io/kube-proxy@sha256:22d1f9b0734b7dbb2266b889edf456303746e750129e4d7f20699f23e9a31acc","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"89199511"},
{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"}
]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-464385 image ls --format json --alsologtostderr:
I0722 00:42:22.521345  562196 out.go:291] Setting OutFile to fd 1 ...
I0722 00:42:22.521566  562196 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 00:42:22.521598  562196 out.go:304] Setting ErrFile to fd 2...
I0722 00:42:22.521623  562196 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 00:42:22.521901  562196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-526659/.minikube/bin
I0722 00:42:22.522604  562196 config.go:182] Loaded profile config "functional-464385": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0722 00:42:22.522779  562196 config.go:182] Loaded profile config "functional-464385": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0722 00:42:22.523297  562196 cli_runner.go:164] Run: docker container inspect functional-464385 --format={{.State.Status}}
I0722 00:42:22.541295  562196 ssh_runner.go:195] Run: systemctl --version
I0722 00:42:22.541345  562196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-464385
I0722 00:42:22.559962  562196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38991 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/functional-464385/id_rsa Username:docker}
I0722 00:42:22.649301  562196 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
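The JSON printed by `image ls --format json` is an array of objects with `id`, `repoDigests`, `repoTags`, and `size` fields, where `size` is a decimal string of bytes. A small sketch of consuming it (the sample entry is one element copied from the output above):

```python
import json

# One element copied from the `image ls --format json` output above.
raw = """[{"id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
           "repoDigests": ["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6"],
           "repoTags": ["registry.k8s.io/pause:3.9"],
           "size": "520014"}]"""

for image in json.loads(raw):
    tags = ", ".join(image["repoTags"]) or "<none>"   # untagged images have repoTags: []
    size_mb = int(image["size"]) / 1e6                # "size" is bytes, as a string
    print(f"{tags}  {image['id'][:13]}  {size_mb:.2f}MB")
```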
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-464385 image ls --format yaml --alsologtostderr:
- id: 4c21f628e28594b2c2e38af4226f4864f469042f786901f6b434ca2d2bfbfd87
repoDigests:
- docker.io/library/d902f7489944e17380d3e83e576600ce445834d7fb619d5b357a0bb561fa37fc-tmp@sha256:6f8cce96d9da009f49a96e3a7d6cd7177afa2e451df45338e608a2414d4b5421
repoTags: []
size: "1637644"
- id: 5461b18aaccf366faf9fba071a5f1ac333cd13435366b32c5e9b8ec903fa18a1
repoDigests:
- docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55
- docker.io/library/nginx@sha256:a7164ab2224553c2da2303d490474d4d546d2141eef1c6367a38d37d46992c62
repoTags:
- docker.io/library/nginx:alpine
size: "46671377"
- id: 443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618
repoDigests:
- docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df
- docker.io/library/nginx@sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e
repoTags:
- docker.io/library/nginx:latest
size: "197104786"
- id: c7eb7c6f07afbac0b6b4ca6c02ff6b3472ab8fc1069b88833867bd64ddda36a6
repoDigests:
- localhost/my-image@sha256:0f033101be921c245f04ab752d433c2fb2779b5349d0cc551f706b70bc32c334
repoTags:
- localhost/my-image:functional-464385
size: "1640225"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
- registry.k8s.io/kube-scheduler@sha256:f194dea192a672732bc45ef2e7a0bcf28080ae6bd0626bd2c444edda987d7b95
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "61568326"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9
- gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1634527"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "58812704"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:078d7873222b53b4530e619e8dc5bccf8420557f3be2b4996a65e59ba4a09499
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "108229958"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2
repoDigests:
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
- docker.io/kindest/kindnetd@sha256:ca8545687e833593ef3047fdbb04957ab9a32153bc36738760b6975879ada987
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "90278450"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
- registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "140414767"
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:30d6b23df5ccf427536840a904047f3cd946c9c78bf9750f0d82b18409d6089e
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "113538528"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: f42786f8afd2214fc59fbf9a26531806f562488d4a7d7a31e8b5e9ff6289b800
repoDigests:
- docker.io/kindest/kindnetd@sha256:14100a3a7aca6cad3de3f26ee342ad937ca7d2844b1983d3baa7bf5f491baa7a
- docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a
repoTags:
- docker.io/kindest/kindnetd:v20240719-e7903573
size: "90281007"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22d1f9b0734b7dbb2266b889edf456303746e750129e4d7f20699f23e9a31acc
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "89199511"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-464385 image ls --format yaml --alsologtostderr:
I0722 00:42:22.290153  562164 out.go:291] Setting OutFile to fd 1 ...
I0722 00:42:22.290308  562164 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 00:42:22.290317  562164 out.go:304] Setting ErrFile to fd 2...
I0722 00:42:22.290322  562164 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 00:42:22.290586  562164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-526659/.minikube/bin
I0722 00:42:22.291286  562164 config.go:182] Loaded profile config "functional-464385": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0722 00:42:22.291413  562164 config.go:182] Loaded profile config "functional-464385": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0722 00:42:22.291895  562164 cli_runner.go:164] Run: docker container inspect functional-464385 --format={{.State.Status}}
I0722 00:42:22.309737  562164 ssh_runner.go:195] Run: systemctl --version
I0722 00:42:22.309795  562164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-464385
I0722 00:42:22.327052  562164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38991 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/functional-464385/id_rsa Username:docker}
I0722 00:42:22.413278  562164 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
TestFunctional/parallel/ImageCommands/ImageBuild (2.61s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-464385 ssh pgrep buildkitd: exit status 1 (245.092108ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 image build -t localhost/my-image:functional-464385 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-464385 image build -t localhost/my-image:functional-464385 testdata/build --alsologtostderr: (2.114223837s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-464385 image build -t localhost/my-image:functional-464385 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4c21f628e28
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-464385
--> c7eb7c6f07a
Successfully tagged localhost/my-image:functional-464385
c7eb7c6f07afbac0b6b4ca6c02ff6b3472ab8fc1069b88833867bd64ddda36a6
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-464385 image build -t localhost/my-image:functional-464385 testdata/build --alsologtostderr:
I0722 00:42:19.914003  561970 out.go:291] Setting OutFile to fd 1 ...
I0722 00:42:19.915166  561970 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 00:42:19.915179  561970 out.go:304] Setting ErrFile to fd 2...
I0722 00:42:19.915184  561970 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 00:42:19.915421  561970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-526659/.minikube/bin
I0722 00:42:19.916018  561970 config.go:182] Loaded profile config "functional-464385": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0722 00:42:19.917507  561970 config.go:182] Loaded profile config "functional-464385": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0722 00:42:19.918128  561970 cli_runner.go:164] Run: docker container inspect functional-464385 --format={{.State.Status}}
I0722 00:42:19.934837  561970 ssh_runner.go:195] Run: systemctl --version
I0722 00:42:19.934907  561970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-464385
I0722 00:42:19.950655  561970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38991 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/functional-464385/id_rsa Username:docker}
I0722 00:42:20.037552  561970 build_images.go:161] Building image from path: /tmp/build.3168090987.tar
I0722 00:42:20.037633  561970 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0722 00:42:20.046933  561970 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3168090987.tar
I0722 00:42:20.050888  561970 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3168090987.tar: stat -c "%s %y" /var/lib/minikube/build/build.3168090987.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3168090987.tar': No such file or directory
I0722 00:42:20.050916  561970 ssh_runner.go:362] scp /tmp/build.3168090987.tar --> /var/lib/minikube/build/build.3168090987.tar (3072 bytes)
I0722 00:42:20.083255  561970 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3168090987
I0722 00:42:20.092774  561970 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3168090987 -xf /var/lib/minikube/build/build.3168090987.tar
I0722 00:42:20.103265  561970 crio.go:315] Building image: /var/lib/minikube/build/build.3168090987
I0722 00:42:20.103339  561970 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-464385 /var/lib/minikube/build/build.3168090987 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0722 00:42:21.957668  561970 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-464385 /var/lib/minikube/build/build.3168090987 --cgroup-manager=cgroupfs: (1.854282602s)
I0722 00:42:21.957732  561970 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3168090987
I0722 00:42:21.966726  561970 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3168090987.tar
I0722 00:42:21.975502  561970 build_images.go:217] Built localhost/my-image:functional-464385 from /tmp/build.3168090987.tar
I0722 00:42:21.975570  561970 build_images.go:133] succeeded building to: functional-464385
I0722 00:42:21.975581  561970 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.61s)
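For reference, the three STEP lines in the build log imply a build context roughly like the following. This is a reconstruction for illustration only; the real context lives in minikube's `testdata/build` directory, and `content.txt` is assumed to be an arbitrary small file alongside the Dockerfile:

```dockerfile
# Hypothetical reconstruction of testdata/build's Dockerfile, inferred from
# the STEP 1/3 .. 3/3 lines in the log above.
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```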
TestFunctional/parallel/ImageCommands/Setup (0.72s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:346: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-464385
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.72s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 image rm kicbase/echo-server:functional-464385 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi kicbase/echo-server:functional-464385
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 image save --daemon kicbase/echo-server:functional-464385 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect kicbase/echo-server:functional-464385
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 update-context --alsologtostderr -v=2
E0722 00:42:30.123187  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
E0722 00:43:52.043436  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-464385 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-464385
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-464385
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-464385
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (191.34s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-107153 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0722 00:46:01.799870  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
E0722 00:46:01.805063  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
E0722 00:46:01.815923  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
E0722 00:46:01.836586  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
E0722 00:46:01.876789  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
E0722 00:46:01.957203  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
E0722 00:46:02.117604  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
E0722 00:46:02.438399  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
E0722 00:46:03.078622  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
E0722 00:46:04.359502  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
E0722 00:46:06.920172  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
E0722 00:46:08.197791  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
E0722 00:46:12.040718  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
E0722 00:46:22.281464  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
E0722 00:46:35.884220  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
E0722 00:46:42.761960  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
E0722 00:47:23.723049  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-107153 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (3m10.473973388s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (191.34s)

TestMultiControlPlane/serial/DeployApp (9.96s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107153 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107153 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-107153 -- rollout status deployment/busybox: (7.054428943s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107153 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107153 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107153 -- exec busybox-fc5497c4f-jsdvh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107153 -- exec busybox-fc5497c4f-q64n4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107153 -- exec busybox-fc5497c4f-qp4ks -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107153 -- exec busybox-fc5497c4f-jsdvh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107153 -- exec busybox-fc5497c4f-q64n4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107153 -- exec busybox-fc5497c4f-qp4ks -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107153 -- exec busybox-fc5497c4f-jsdvh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107153 -- exec busybox-fc5497c4f-q64n4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107153 -- exec busybox-fc5497c4f-qp4ks -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.96s)

TestMultiControlPlane/serial/PingHostFromPods (1.53s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107153 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107153 -- exec busybox-fc5497c4f-jsdvh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107153 -- exec busybox-fc5497c4f-jsdvh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107153 -- exec busybox-fc5497c4f-q64n4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107153 -- exec busybox-fc5497c4f-q64n4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107153 -- exec busybox-fc5497c4f-qp4ks -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107153 -- exec busybox-fc5497c4f-qp4ks -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.53s)

TestMultiControlPlane/serial/AddWorkerNode (36.28s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-107153 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-107153 -v=7 --alsologtostderr: (35.320318921s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (36.28s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-107153 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.71s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.71s)

TestMultiControlPlane/serial/CopyFile (18.46s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 cp testdata/cp-test.txt ha-107153:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 cp ha-107153:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2839622490/001/cp-test_ha-107153.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 cp ha-107153:/home/docker/cp-test.txt ha-107153-m02:/home/docker/cp-test_ha-107153_ha-107153-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153-m02 "sudo cat /home/docker/cp-test_ha-107153_ha-107153-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 cp ha-107153:/home/docker/cp-test.txt ha-107153-m03:/home/docker/cp-test_ha-107153_ha-107153-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153-m03 "sudo cat /home/docker/cp-test_ha-107153_ha-107153-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 cp ha-107153:/home/docker/cp-test.txt ha-107153-m04:/home/docker/cp-test_ha-107153_ha-107153-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153-m04 "sudo cat /home/docker/cp-test_ha-107153_ha-107153-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 cp testdata/cp-test.txt ha-107153-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 cp ha-107153-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2839622490/001/cp-test_ha-107153-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 cp ha-107153-m02:/home/docker/cp-test.txt ha-107153:/home/docker/cp-test_ha-107153-m02_ha-107153.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153 "sudo cat /home/docker/cp-test_ha-107153-m02_ha-107153.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 cp ha-107153-m02:/home/docker/cp-test.txt ha-107153-m03:/home/docker/cp-test_ha-107153-m02_ha-107153-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153-m03 "sudo cat /home/docker/cp-test_ha-107153-m02_ha-107153-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 cp ha-107153-m02:/home/docker/cp-test.txt ha-107153-m04:/home/docker/cp-test_ha-107153-m02_ha-107153-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153-m04 "sudo cat /home/docker/cp-test_ha-107153-m02_ha-107153-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 cp testdata/cp-test.txt ha-107153-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 cp ha-107153-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2839622490/001/cp-test_ha-107153-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 cp ha-107153-m03:/home/docker/cp-test.txt ha-107153:/home/docker/cp-test_ha-107153-m03_ha-107153.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153 "sudo cat /home/docker/cp-test_ha-107153-m03_ha-107153.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 cp ha-107153-m03:/home/docker/cp-test.txt ha-107153-m02:/home/docker/cp-test_ha-107153-m03_ha-107153-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153-m02 "sudo cat /home/docker/cp-test_ha-107153-m03_ha-107153-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 cp ha-107153-m03:/home/docker/cp-test.txt ha-107153-m04:/home/docker/cp-test_ha-107153-m03_ha-107153-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153-m04 "sudo cat /home/docker/cp-test_ha-107153-m03_ha-107153-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 cp testdata/cp-test.txt ha-107153-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 cp ha-107153-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2839622490/001/cp-test_ha-107153-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 cp ha-107153-m04:/home/docker/cp-test.txt ha-107153:/home/docker/cp-test_ha-107153-m04_ha-107153.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153 "sudo cat /home/docker/cp-test_ha-107153-m04_ha-107153.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 cp ha-107153-m04:/home/docker/cp-test.txt ha-107153-m02:/home/docker/cp-test_ha-107153-m04_ha-107153-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153-m04 "sudo cat /home/docker/cp-test.txt"
E0722 00:48:45.644186  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153-m02 "sudo cat /home/docker/cp-test_ha-107153-m04_ha-107153-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 cp ha-107153-m04:/home/docker/cp-test.txt ha-107153-m03:/home/docker/cp-test_ha-107153-m04_ha-107153-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 ssh -n ha-107153-m03 "sudo cat /home/docker/cp-test_ha-107153-m04_ha-107153-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.46s)

TestMultiControlPlane/serial/StopSecondaryNode (12.7s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-107153 node stop m02 -v=7 --alsologtostderr: (11.978368295s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-107153 status -v=7 --alsologtostderr: exit status 7 (724.35833ms)
-- stdout --
	ha-107153
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-107153-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-107153-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-107153-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0722 00:48:59.286668  578685 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:48:59.286859  578685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:48:59.286889  578685 out.go:304] Setting ErrFile to fd 2...
	I0722 00:48:59.286909  578685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:48:59.287197  578685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-526659/.minikube/bin
	I0722 00:48:59.287409  578685 out.go:298] Setting JSON to false
	I0722 00:48:59.287471  578685 mustload.go:65] Loading cluster: ha-107153
	I0722 00:48:59.287564  578685 notify.go:220] Checking for updates...
	I0722 00:48:59.288001  578685 config.go:182] Loaded profile config "ha-107153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:48:59.288043  578685 status.go:255] checking status of ha-107153 ...
	I0722 00:48:59.288584  578685 cli_runner.go:164] Run: docker container inspect ha-107153 --format={{.State.Status}}
	I0722 00:48:59.308666  578685 status.go:330] ha-107153 host status = "Running" (err=<nil>)
	I0722 00:48:59.308688  578685 host.go:66] Checking if "ha-107153" exists ...
	I0722 00:48:59.309104  578685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107153
	I0722 00:48:59.342488  578685 host.go:66] Checking if "ha-107153" exists ...
	I0722 00:48:59.342777  578685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 00:48:59.342821  578685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107153
	I0722 00:48:59.365560  578685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38996 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/ha-107153/id_rsa Username:docker}
	I0722 00:48:59.454114  578685 ssh_runner.go:195] Run: systemctl --version
	I0722 00:48:59.458490  578685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:48:59.470591  578685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0722 00:48:59.546207  578685 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-07-22 00:48:59.53657857 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0722 00:48:59.546842  578685 kubeconfig.go:125] found "ha-107153" server: "https://192.168.49.254:8443"
	I0722 00:48:59.546876  578685 api_server.go:166] Checking apiserver status ...
	I0722 00:48:59.546924  578685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:48:59.558166  578685 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1430/cgroup
	I0722 00:48:59.568670  578685 api_server.go:182] apiserver freezer: "6:freezer:/docker/c31ad1c3cda1bd04f1657c673786543b79b03e7349f5333ab82af02b3324fa9e/crio/crio-a26ad45abc1f1df3b8ca0bd3c24b0d0bc330464cb8ac76e70b2fd73c3cd18200"
	I0722 00:48:59.568827  578685 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c31ad1c3cda1bd04f1657c673786543b79b03e7349f5333ab82af02b3324fa9e/crio/crio-a26ad45abc1f1df3b8ca0bd3c24b0d0bc330464cb8ac76e70b2fd73c3cd18200/freezer.state
	I0722 00:48:59.577924  578685 api_server.go:204] freezer state: "THAWED"
	I0722 00:48:59.577952  578685 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0722 00:48:59.585920  578685 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0722 00:48:59.585949  578685 status.go:422] ha-107153 apiserver status = Running (err=<nil>)
	I0722 00:48:59.585961  578685 status.go:257] ha-107153 status: &{Name:ha-107153 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 00:48:59.585978  578685 status.go:255] checking status of ha-107153-m02 ...
	I0722 00:48:59.586289  578685 cli_runner.go:164] Run: docker container inspect ha-107153-m02 --format={{.State.Status}}
	I0722 00:48:59.602541  578685 status.go:330] ha-107153-m02 host status = "Stopped" (err=<nil>)
	I0722 00:48:59.602565  578685 status.go:343] host is not running, skipping remaining checks
	I0722 00:48:59.602587  578685 status.go:257] ha-107153-m02 status: &{Name:ha-107153-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 00:48:59.602609  578685 status.go:255] checking status of ha-107153-m03 ...
	I0722 00:48:59.603042  578685 cli_runner.go:164] Run: docker container inspect ha-107153-m03 --format={{.State.Status}}
	I0722 00:48:59.620199  578685 status.go:330] ha-107153-m03 host status = "Running" (err=<nil>)
	I0722 00:48:59.620225  578685 host.go:66] Checking if "ha-107153-m03" exists ...
	I0722 00:48:59.620517  578685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107153-m03
	I0722 00:48:59.640166  578685 host.go:66] Checking if "ha-107153-m03" exists ...
	I0722 00:48:59.640574  578685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 00:48:59.640643  578685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107153-m03
	I0722 00:48:59.657821  578685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:39006 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/ha-107153-m03/id_rsa Username:docker}
	I0722 00:48:59.746004  578685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:48:59.757684  578685 kubeconfig.go:125] found "ha-107153" server: "https://192.168.49.254:8443"
	I0722 00:48:59.757713  578685 api_server.go:166] Checking apiserver status ...
	I0722 00:48:59.757758  578685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:48:59.769925  578685 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1406/cgroup
	I0722 00:48:59.779643  578685 api_server.go:182] apiserver freezer: "6:freezer:/docker/1fff7535d089d95fddaf0b833a2f8b99cbbb4df16a69a59a8994ac3647f469e9/crio/crio-08c5539e3528cf26bf267686c5b597f810aa68cfa215911ffdbc2abea8e8faed"
	I0722 00:48:59.779715  578685 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1fff7535d089d95fddaf0b833a2f8b99cbbb4df16a69a59a8994ac3647f469e9/crio/crio-08c5539e3528cf26bf267686c5b597f810aa68cfa215911ffdbc2abea8e8faed/freezer.state
	I0722 00:48:59.788776  578685 api_server.go:204] freezer state: "THAWED"
	I0722 00:48:59.788804  578685 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0722 00:48:59.796822  578685 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0722 00:48:59.796901  578685 status.go:422] ha-107153-m03 apiserver status = Running (err=<nil>)
	I0722 00:48:59.796925  578685 status.go:257] ha-107153-m03 status: &{Name:ha-107153-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 00:48:59.796971  578685 status.go:255] checking status of ha-107153-m04 ...
	I0722 00:48:59.797343  578685 cli_runner.go:164] Run: docker container inspect ha-107153-m04 --format={{.State.Status}}
	I0722 00:48:59.816611  578685 status.go:330] ha-107153-m04 host status = "Running" (err=<nil>)
	I0722 00:48:59.816633  578685 host.go:66] Checking if "ha-107153-m04" exists ...
	I0722 00:48:59.817070  578685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107153-m04
	I0722 00:48:59.834168  578685 host.go:66] Checking if "ha-107153-m04" exists ...
	I0722 00:48:59.834489  578685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 00:48:59.834542  578685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107153-m04
	I0722 00:48:59.852098  578685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:39011 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/ha-107153-m04/id_rsa Username:docker}
	I0722 00:48:59.937930  578685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:48:59.953167  578685 status.go:257] ha-107153-m04 status: &{Name:ha-107153-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.70s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

TestMultiControlPlane/serial/RestartSecondaryNode (30.8s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-107153 node start m02 -v=7 --alsologtostderr: (29.243161486s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-107153 status -v=7 --alsologtostderr: (1.407666206s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (30.80s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.24s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (4.238359577s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.24s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (202.79s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-107153 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-107153 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-107153 -v=7 --alsologtostderr: (37.014888978s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-107153 --wait=true -v=7 --alsologtostderr
E0722 00:51:01.800177  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
E0722 00:51:08.198078  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
E0722 00:51:29.484423  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-107153 --wait=true -v=7 --alsologtostderr: (2m45.623726035s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-107153
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (202.79s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.92s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-107153 node delete m03 -v=7 --alsologtostderr: (11.861692859s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.92s)
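The `kubectl get nodes -o go-template` invocation above walks every node's conditions and emits only the `Ready` condition's status. A minimal Python sketch of the same filter, run against hypothetical `kubectl get nodes -o json` output (the node names and condition list are illustrative, not taken from this run):

```python
# Mimics the go-template used by the test:
#   {{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{end}}{{end}}{{end}}
# applied to hypothetical `kubectl get nodes -o json` output.
import json

sample = json.loads("""
{
  "items": [
    {"metadata": {"name": "ha-107153"},
     "status": {"conditions": [
       {"type": "MemoryPressure", "status": "False"},
       {"type": "Ready", "status": "True"}]}},
    {"metadata": {"name": "ha-107153-m04"},
     "status": {"conditions": [
       {"type": "Ready", "status": "True"}]}}
  ]
}
""")

def ready_statuses(nodes):
    """Return the Ready condition status for every node, as the template does."""
    return [cond["status"]
            for item in nodes["items"]
            for cond in item["status"]["conditions"]
            if cond["type"] == "Ready"]

print(ready_statuses(sample))  # ['True', 'True']
```

The test passes when every emitted status is `True`, i.e. every remaining node reports Ready after the secondary node is deleted.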

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

TestMultiControlPlane/serial/StopCluster (35.81s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-107153 stop -v=7 --alsologtostderr: (35.690536628s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-107153 status -v=7 --alsologtostderr: exit status 7 (117.202555ms)
-- stdout --
	ha-107153
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-107153-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-107153-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0722 00:53:47.679304  593190 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:53:47.679481  593190 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:53:47.679491  593190 out.go:304] Setting ErrFile to fd 2...
	I0722 00:53:47.679497  593190 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:53:47.679731  593190 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-526659/.minikube/bin
	I0722 00:53:47.679912  593190 out.go:298] Setting JSON to false
	I0722 00:53:47.679945  593190 mustload.go:65] Loading cluster: ha-107153
	I0722 00:53:47.680034  593190 notify.go:220] Checking for updates...
	I0722 00:53:47.680345  593190 config.go:182] Loaded profile config "ha-107153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:53:47.680355  593190 status.go:255] checking status of ha-107153 ...
	I0722 00:53:47.680868  593190 cli_runner.go:164] Run: docker container inspect ha-107153 --format={{.State.Status}}
	I0722 00:53:47.698669  593190 status.go:330] ha-107153 host status = "Stopped" (err=<nil>)
	I0722 00:53:47.698692  593190 status.go:343] host is not running, skipping remaining checks
	I0722 00:53:47.698700  593190 status.go:257] ha-107153 status: &{Name:ha-107153 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 00:53:47.698730  593190 status.go:255] checking status of ha-107153-m02 ...
	I0722 00:53:47.699036  593190 cli_runner.go:164] Run: docker container inspect ha-107153-m02 --format={{.State.Status}}
	I0722 00:53:47.726294  593190 status.go:330] ha-107153-m02 host status = "Stopped" (err=<nil>)
	I0722 00:53:47.726315  593190 status.go:343] host is not running, skipping remaining checks
	I0722 00:53:47.726322  593190 status.go:257] ha-107153-m02 status: &{Name:ha-107153-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 00:53:47.726343  593190 status.go:255] checking status of ha-107153-m04 ...
	I0722 00:53:47.726687  593190 cli_runner.go:164] Run: docker container inspect ha-107153-m04 --format={{.State.Status}}
	I0722 00:53:47.751740  593190 status.go:330] ha-107153-m04 host status = "Stopped" (err=<nil>)
	I0722 00:53:47.751767  593190 status.go:343] host is not running, skipping remaining checks
	I0722 00:53:47.751774  593190 status.go:257] ha-107153-m04 status: &{Name:ha-107153-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.81s)
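The `-- stdout --` block above is the plain-text shape `minikube status` prints for a multi-node profile: a name line followed by `key: value` fields, one blank-line-separated block per node. As a sketch only (the test itself asserts on the exit status, not on this text), the output can be parsed like so:

```python
# Parses the plain-text `minikube status` output captured above into one
# dict of fields per node. Illustrative parsing sketch; not how the test
# checks status (it relies on the non-zero exit code).
STATUS_TEXT = """\
ha-107153
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-107153-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-107153-m04
type: Worker
host: Stopped
kubelet: Stopped
"""

def parse_status(text):
    """Split on blank lines: first line is the node name, the rest are fields."""
    nodes = {}
    for block in text.strip().split("\n\n"):
        lines = block.splitlines()
        name, fields = lines[0], lines[1:]
        nodes[name] = dict(line.split(": ", 1) for line in fields)
    return nodes

nodes = parse_status(STATUS_TEXT)
print(nodes["ha-107153-m04"]["host"])  # Stopped
```

Note the worker node (`ha-107153-m04`) has no `apiserver`/`kubeconfig` fields, matching the shorter block in the captured output.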

TestMultiControlPlane/serial/RestartCluster (63.03s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-107153 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-107153 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m2.12899265s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (63.03s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

TestMultiControlPlane/serial/AddSecondaryNode (76.36s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-107153 --control-plane -v=7 --alsologtostderr
E0722 00:56:01.799605  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-107153 --control-plane -v=7 --alsologtostderr: (1m15.377578004s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-107153 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.36s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.78s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0722 00:56:08.198120  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.78s)

TestJSONOutput/start/Command (59.48s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-651424 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-651424 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (59.471579797s)
--- PASS: TestJSONOutput/start/Command (59.48s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-651424 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.63s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-651424 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-651424 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-651424 --output=json --user=testUser: (5.854548788s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-819146 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-819146 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (71.121269ms)
-- stdout --
	{"specversion":"1.0","id":"4de55008-52d1-495b-b1a5-905bd7197b81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-819146] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8e46fbbd-ffda-4208-a11c-407926b4ab82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19312"}}
	{"specversion":"1.0","id":"362f46d6-0012-425f-a467-ee2f77f7b815","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8697507a-ac99-4fa0-97ba-b87d45c46e80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19312-526659/kubeconfig"}}
	{"specversion":"1.0","id":"c5a540f9-4dee-44b2-989b-559f7e365f42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-526659/.minikube"}}
	{"specversion":"1.0","id":"2a28f785-f959-4c49-87b1-2c2d09876ae4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b4300f04-8171-4319-bca0-3de9ac9b86a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"83c701c8-c851-4249-bc33-5ebfdc1ea801","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-819146" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-819146
--- PASS: TestErrorJSONOutput (0.21s)
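Each line minikube prints with `--output=json` is a CloudEvents-style JSON envelope (`specversion`, `id`, `source`, `type`, `data`), as the stdout block above shows. A small sketch of how a consumer of that stream could filter for error events and read out the exit code, using a truncated sample of the lines captured above:

```python
# Filter a CloudEvents-style minikube --output=json stream for error events.
# The two sample lines are abridged copies of the captured stdout above.
import json

lines = [
    '{"specversion":"1.0","id":"8e46fbbd-ffda-4208-a11c-407926b4ab82",'
    '"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info",'
    '"datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19312"}}',
    '{"specversion":"1.0","id":"83c701c8-c851-4249-bc33-5ebfdc1ea801",'
    '"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",'
    '"datacontenttype":"application/json","data":{"advice":"","exitcode":"56",'
    '"issues":"","message":"The driver \'fail\' is not supported on linux/arm64",'
    '"name":"DRV_UNSUPPORTED_OS","url":""}}',
]

events = [json.loads(line) for line in lines]
errors = [e["data"] for e in events if e["type"] == "io.k8s.sigs.minikube.error"]
for err in errors:
    print(err["name"], err["exitcode"])  # DRV_UNSUPPORTED_OS 56
```

The `exitcode` inside the error event matches the process exit status the test asserts on (`exit status 56`).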

TestKicCustomNetwork/create_custom_network (39.69s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-924186 --network=
E0722 00:57:31.244668  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-924186 --network=: (37.60267591s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-924186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-924186
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-924186: (2.058183251s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.69s)

TestKicCustomNetwork/use_default_bridge_network (31.92s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-454181 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-454181 --network=bridge: (29.950088028s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-454181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-454181
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-454181: (1.953281838s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.92s)

TestKicExistingNetwork (34.09s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-865768 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-865768 --network=existing-network: (31.897446235s)
helpers_test.go:175: Cleaning up "existing-network-865768" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-865768
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-865768: (2.025983955s)
--- PASS: TestKicExistingNetwork (34.09s)

TestKicCustomSubnet (37.52s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-215645 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-215645 --subnet=192.168.60.0/24: (35.711942889s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-215645 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-215645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-215645
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-215645: (1.78678712s)
--- PASS: TestKicCustomSubnet (37.52s)
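The subnet check above uses `docker network inspect ... --format "{{(index .IPAM.Config 0).Subnet}}"`: inspect returns a JSON array, and the template indexes into the first `IPAM.Config` entry. The same lookup in Python, against a hypothetical (abridged) `docker network inspect` payload:

```python
# Mirrors `docker network inspect --format "{{(index .IPAM.Config 0).Subnet}}"`.
# The payload below is a hypothetical, abridged inspect result for the
# network created by the test with --subnet=192.168.60.0/24.
import json

inspect_output = json.loads("""
[
  {
    "Name": "custom-subnet-215645",
    "IPAM": {
      "Driver": "default",
      "Config": [{"Subnet": "192.168.60.0/24", "Gateway": "192.168.60.1"}]
    }
  }
]
""")

# inspect returns a list of networks; take the first entry's first IPAM config.
subnet = inspect_output[0]["IPAM"]["Config"][0]["Subnet"]
print(subnet)  # 192.168.60.0/24
```

The test passes when the reported subnet equals the `--subnet` value it started the profile with.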

TestKicStaticIP (34.92s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-251262 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-251262 --static-ip=192.168.200.200: (32.769118249s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-251262 ip
helpers_test.go:175: Cleaning up "static-ip-251262" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-251262
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-251262: (2.014699554s)
--- PASS: TestKicStaticIP (34.92s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (73.68s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-974288 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-974288 --driver=docker  --container-runtime=crio: (34.20810168s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-977012 --driver=docker  --container-runtime=crio
E0722 01:01:01.799480  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
E0722 01:01:08.197615  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-977012 --driver=docker  --container-runtime=crio: (34.085604363s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-974288
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-977012
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-977012" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-977012
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-977012: (1.954932362s)
helpers_test.go:175: Cleaning up "first-974288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-974288
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-974288: (2.251373521s)
--- PASS: TestMinikubeProfile (73.68s)

TestMountStart/serial/StartWithMountFirst (6.99s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-629829 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-629829 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.985461455s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.99s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-629829 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (9.04s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-643679 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-643679 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.040758793s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.04s)
TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-643679 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)
TestMountStart/serial/DeleteFirst (1.61s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-629829 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-629829 --alsologtostderr -v=5: (1.614032447s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)
TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-643679 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)
TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-643679
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-643679: (1.203058906s)
--- PASS: TestMountStart/serial/Stop (1.20s)
TestMountStart/serial/RestartStopped (8.17s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-643679
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-643679: (7.17220956s)
--- PASS: TestMountStart/serial/RestartStopped (8.17s)
TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-643679 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)
TestMultiNode/serial/FreshStart2Nodes (89.46s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-163302 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0722 01:02:24.844917  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-163302 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m28.952375705s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (89.46s)
TestMultiNode/serial/DeployApp2Nodes (4.94s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-163302 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-163302 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-163302 -- rollout status deployment/busybox: (3.064728983s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-163302 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-163302 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-163302 -- exec busybox-fc5497c4f-2jqzs -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-163302 -- exec busybox-fc5497c4f-m74jb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-163302 -- exec busybox-fc5497c4f-2jqzs -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-163302 -- exec busybox-fc5497c4f-m74jb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-163302 -- exec busybox-fc5497c4f-2jqzs -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-163302 -- exec busybox-fc5497c4f-m74jb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.94s)
TestMultiNode/serial/PingHostFrom2Pods (1.04s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-163302 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-163302 -- exec busybox-fc5497c4f-2jqzs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-163302 -- exec busybox-fc5497c4f-2jqzs -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-163302 -- exec busybox-fc5497c4f-m74jb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-163302 -- exec busybox-fc5497c4f-m74jb -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.04s)

TestMultiNode/serial/AddNode (30.57s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-163302 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-163302 -v 3 --alsologtostderr: (29.914552152s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (30.57s)
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-163302 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)
TestMultiNode/serial/ProfileList (0.32s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.32s)
TestMultiNode/serial/CopyFile (9.62s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 cp testdata/cp-test.txt multinode-163302:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 ssh -n multinode-163302 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 cp multinode-163302:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1669631627/001/cp-test_multinode-163302.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 ssh -n multinode-163302 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 cp multinode-163302:/home/docker/cp-test.txt multinode-163302-m02:/home/docker/cp-test_multinode-163302_multinode-163302-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 ssh -n multinode-163302 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 ssh -n multinode-163302-m02 "sudo cat /home/docker/cp-test_multinode-163302_multinode-163302-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 cp multinode-163302:/home/docker/cp-test.txt multinode-163302-m03:/home/docker/cp-test_multinode-163302_multinode-163302-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 ssh -n multinode-163302 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 ssh -n multinode-163302-m03 "sudo cat /home/docker/cp-test_multinode-163302_multinode-163302-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 cp testdata/cp-test.txt multinode-163302-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 ssh -n multinode-163302-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 cp multinode-163302-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1669631627/001/cp-test_multinode-163302-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 ssh -n multinode-163302-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 cp multinode-163302-m02:/home/docker/cp-test.txt multinode-163302:/home/docker/cp-test_multinode-163302-m02_multinode-163302.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 ssh -n multinode-163302-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 ssh -n multinode-163302 "sudo cat /home/docker/cp-test_multinode-163302-m02_multinode-163302.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 cp multinode-163302-m02:/home/docker/cp-test.txt multinode-163302-m03:/home/docker/cp-test_multinode-163302-m02_multinode-163302-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 ssh -n multinode-163302-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 ssh -n multinode-163302-m03 "sudo cat /home/docker/cp-test_multinode-163302-m02_multinode-163302-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 cp testdata/cp-test.txt multinode-163302-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 ssh -n multinode-163302-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 cp multinode-163302-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1669631627/001/cp-test_multinode-163302-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 ssh -n multinode-163302-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 cp multinode-163302-m03:/home/docker/cp-test.txt multinode-163302:/home/docker/cp-test_multinode-163302-m03_multinode-163302.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 ssh -n multinode-163302-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 ssh -n multinode-163302 "sudo cat /home/docker/cp-test_multinode-163302-m03_multinode-163302.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 cp multinode-163302-m03:/home/docker/cp-test.txt multinode-163302-m02:/home/docker/cp-test_multinode-163302-m03_multinode-163302-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 ssh -n multinode-163302-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 ssh -n multinode-163302-m02 "sudo cat /home/docker/cp-test_multinode-163302-m03_multinode-163302-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.62s)
TestMultiNode/serial/StopNode (2.23s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-163302 node stop m03: (1.204834883s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-163302 status: exit status 7 (495.621288ms)

                                                
                                                
-- stdout --
	multinode-163302
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-163302-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-163302-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-163302 status --alsologtostderr: exit status 7 (525.111348ms)

                                                
                                                
-- stdout --
	multinode-163302
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-163302-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-163302-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 01:04:27.733526  646864 out.go:291] Setting OutFile to fd 1 ...
	I0722 01:04:27.733760  646864 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 01:04:27.733788  646864 out.go:304] Setting ErrFile to fd 2...
	I0722 01:04:27.733806  646864 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 01:04:27.734101  646864 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-526659/.minikube/bin
	I0722 01:04:27.734343  646864 out.go:298] Setting JSON to false
	I0722 01:04:27.734407  646864 mustload.go:65] Loading cluster: multinode-163302
	I0722 01:04:27.734543  646864 notify.go:220] Checking for updates...
	I0722 01:04:27.734919  646864 config.go:182] Loaded profile config "multinode-163302": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 01:04:27.734954  646864 status.go:255] checking status of multinode-163302 ...
	I0722 01:04:27.735628  646864 cli_runner.go:164] Run: docker container inspect multinode-163302 --format={{.State.Status}}
	I0722 01:04:27.755761  646864 status.go:330] multinode-163302 host status = "Running" (err=<nil>)
	I0722 01:04:27.755782  646864 host.go:66] Checking if "multinode-163302" exists ...
	I0722 01:04:27.756080  646864 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-163302
	I0722 01:04:27.773813  646864 host.go:66] Checking if "multinode-163302" exists ...
	I0722 01:04:27.774135  646864 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 01:04:27.774202  646864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-163302
	I0722 01:04:27.800206  646864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:39116 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/multinode-163302/id_rsa Username:docker}
	I0722 01:04:27.893778  646864 ssh_runner.go:195] Run: systemctl --version
	I0722 01:04:27.897927  646864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 01:04:27.909634  646864 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0722 01:04:27.984847  646864 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-07-22 01:04:27.975071544 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0722 01:04:27.985525  646864 kubeconfig.go:125] found "multinode-163302" server: "https://192.168.58.2:8443"
	I0722 01:04:27.985560  646864 api_server.go:166] Checking apiserver status ...
	I0722 01:04:27.985628  646864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 01:04:27.996971  646864 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1441/cgroup
	I0722 01:04:28.008483  646864 api_server.go:182] apiserver freezer: "6:freezer:/docker/6897bd9cdf8ddb772da972e9218624b3e447160e3cb699af84d85d94094a8f36/crio/crio-387eeb9c39bdd60c3d35ef005e2b88eed1ba2e0bba8519b0d19f52cf53c68529"
	I0722 01:04:28.008568  646864 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6897bd9cdf8ddb772da972e9218624b3e447160e3cb699af84d85d94094a8f36/crio/crio-387eeb9c39bdd60c3d35ef005e2b88eed1ba2e0bba8519b0d19f52cf53c68529/freezer.state
	I0722 01:04:28.018478  646864 api_server.go:204] freezer state: "THAWED"
	I0722 01:04:28.018510  646864 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0722 01:04:28.026284  646864 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0722 01:04:28.026314  646864 status.go:422] multinode-163302 apiserver status = Running (err=<nil>)
	I0722 01:04:28.026326  646864 status.go:257] multinode-163302 status: &{Name:multinode-163302 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 01:04:28.026344  646864 status.go:255] checking status of multinode-163302-m02 ...
	I0722 01:04:28.026699  646864 cli_runner.go:164] Run: docker container inspect multinode-163302-m02 --format={{.State.Status}}
	I0722 01:04:28.043507  646864 status.go:330] multinode-163302-m02 host status = "Running" (err=<nil>)
	I0722 01:04:28.043541  646864 host.go:66] Checking if "multinode-163302-m02" exists ...
	I0722 01:04:28.043858  646864 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-163302-m02
	I0722 01:04:28.061443  646864 host.go:66] Checking if "multinode-163302-m02" exists ...
	I0722 01:04:28.061765  646864 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 01:04:28.061819  646864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-163302-m02
	I0722 01:04:28.085013  646864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:39121 SSHKeyPath:/home/jenkins/minikube-integration/19312-526659/.minikube/machines/multinode-163302-m02/id_rsa Username:docker}
	I0722 01:04:28.170014  646864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 01:04:28.181968  646864 status.go:257] multinode-163302-m02 status: &{Name:multinode-163302-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0722 01:04:28.182003  646864 status.go:255] checking status of multinode-163302-m03 ...
	I0722 01:04:28.182342  646864 cli_runner.go:164] Run: docker container inspect multinode-163302-m03 --format={{.State.Status}}
	I0722 01:04:28.203614  646864 status.go:330] multinode-163302-m03 host status = "Stopped" (err=<nil>)
	I0722 01:04:28.203640  646864 status.go:343] host is not running, skipping remaining checks
	I0722 01:04:28.203648  646864 status.go:257] multinode-163302-m03 status: &{Name:multinode-163302-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)
TestMultiNode/serial/StartAfterStop (9.82s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-163302 node start m03 -v=7 --alsologtostderr: (9.096548628s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.82s)
TestMultiNode/serial/RestartKeepsNodes (86.38s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-163302
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-163302
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-163302: (24.792382599s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-163302 --wait=true -v=8 --alsologtostderr
E0722 01:06:01.799274  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-163302 --wait=true -v=8 --alsologtostderr: (1m1.478472043s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-163302
--- PASS: TestMultiNode/serial/RestartKeepsNodes (86.38s)
TestMultiNode/serial/DeleteNode (5.23s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 node delete m03
E0722 01:06:08.197368  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-163302 node delete m03: (4.569179473s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.23s)
TestMultiNode/serial/StopMultiNode (23.85s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-163302 stop: (23.660308368s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-163302 status: exit status 7 (95.211488ms)

                                                
                                                
-- stdout --
	multinode-163302
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-163302-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-163302 status --alsologtostderr: exit status 7 (91.498671ms)

                                                
                                                
-- stdout --
	multinode-163302
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-163302-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 01:06:33.451923  654329 out.go:291] Setting OutFile to fd 1 ...
	I0722 01:06:33.452095  654329 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 01:06:33.452108  654329 out.go:304] Setting ErrFile to fd 2...
	I0722 01:06:33.452114  654329 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 01:06:33.452385  654329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-526659/.minikube/bin
	I0722 01:06:33.452570  654329 out.go:298] Setting JSON to false
	I0722 01:06:33.452598  654329 mustload.go:65] Loading cluster: multinode-163302
	I0722 01:06:33.452695  654329 notify.go:220] Checking for updates...
	I0722 01:06:33.453003  654329 config.go:182] Loaded profile config "multinode-163302": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 01:06:33.453019  654329 status.go:255] checking status of multinode-163302 ...
	I0722 01:06:33.453591  654329 cli_runner.go:164] Run: docker container inspect multinode-163302 --format={{.State.Status}}
	I0722 01:06:33.480842  654329 status.go:330] multinode-163302 host status = "Stopped" (err=<nil>)
	I0722 01:06:33.480865  654329 status.go:343] host is not running, skipping remaining checks
	I0722 01:06:33.480873  654329 status.go:257] multinode-163302 status: &{Name:multinode-163302 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 01:06:33.480904  654329 status.go:255] checking status of multinode-163302-m02 ...
	I0722 01:06:33.481223  654329 cli_runner.go:164] Run: docker container inspect multinode-163302-m02 --format={{.State.Status}}
	I0722 01:06:33.497061  654329 status.go:330] multinode-163302-m02 host status = "Stopped" (err=<nil>)
	I0722 01:06:33.497090  654329 status.go:343] host is not running, skipping remaining checks
	I0722 01:06:33.497097  654329 status.go:257] multinode-163302-m02 status: &{Name:multinode-163302-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.85s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (54.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-163302 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-163302 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (53.859268681s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-163302 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.54s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (35.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-163302
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-163302-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-163302-m02 --driver=docker  --container-runtime=crio: exit status 14 (87.530049ms)

                                                
                                                
-- stdout --
	* [multinode-163302-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-526659/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-526659/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-163302-m02' is duplicated with machine name 'multinode-163302-m02' in profile 'multinode-163302'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-163302-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-163302-m03 --driver=docker  --container-runtime=crio: (32.784578495s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-163302
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-163302: exit status 80 (316.041667ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-163302 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-163302-m03 already exists in multinode-163302-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_5.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-163302-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-163302-m03: (1.942013619s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.18s)

                                                
                                    
TestPreload (125.95s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-225429 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-225429 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m35.386906151s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-225429 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-225429 image pull gcr.io/k8s-minikube/busybox: (1.862460404s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-225429
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-225429: (5.771291781s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-225429 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-225429 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (20.185609927s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-225429 image list
helpers_test.go:175: Cleaning up "test-preload-225429" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-225429
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-225429: (2.432568303s)
--- PASS: TestPreload (125.95s)

                                                
                                    
TestScheduledStopUnix (107.31s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-484809 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-484809 --memory=2048 --driver=docker  --container-runtime=crio: (31.0284618s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-484809 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-484809 -n scheduled-stop-484809
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-484809 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-484809 --cancel-scheduled
E0722 01:11:01.799179  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
E0722 01:11:08.197528  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-484809 -n scheduled-stop-484809
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-484809
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-484809 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-484809
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-484809: exit status 7 (67.665037ms)

                                                
                                                
-- stdout --
	scheduled-stop-484809
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-484809 -n scheduled-stop-484809
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-484809 -n scheduled-stop-484809: exit status 7 (65.157339ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-484809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-484809
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-484809: (4.696189501s)
--- PASS: TestScheduledStopUnix (107.31s)

                                                
                                    
TestInsufficientStorage (10.53s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-292066 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-292066 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.07248374s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1aec3163-def8-47ae-b2c3-aba5942667c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-292066] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cb8e8dff-9c33-49a8-b21b-70e85982d545","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19312"}}
	{"specversion":"1.0","id":"5bdb2611-1a19-4143-9269-7530af55984a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"163b449a-d8d3-4f3d-924d-d0e2bb919733","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19312-526659/kubeconfig"}}
	{"specversion":"1.0","id":"125ac2ed-f135-47ad-b735-45ca1aa8395f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-526659/.minikube"}}
	{"specversion":"1.0","id":"2b5eff13-ef50-426f-a36a-b4c10dc9ee4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1c9a3b5c-7f4e-48e0-8202-88fc367cf646","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"149fadc8-4516-4418-b4a3-84952101d209","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c03b1322-c9e2-432c-ad96-3fd637c97966","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b0d36a32-a326-46b8-b37e-a96c0829323f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e0a383d2-78c5-45fc-8f60-b5fbe937492a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"3fce5df1-b952-4853-8369-42dc6537417e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-292066\" primary control-plane node in \"insufficient-storage-292066\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"bc81ceaf-eac0-4bbc-ae79-bcacca113e5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1721324606-19298 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a3026df8-e726-4f9c-9e15-6ae16a36c1f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"83b8068e-d455-4ba7-8b07-6897ee2324c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-292066 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-292066 --output=json --layout=cluster: exit status 7 (282.762972ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-292066","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-292066","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 01:12:08.927022  672108 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-292066" does not appear in /home/jenkins/minikube-integration/19312-526659/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-292066 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-292066 --output=json --layout=cluster: exit status 7 (267.63352ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-292066","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-292066","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 01:12:09.197252  672169 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-292066" does not appear in /home/jenkins/minikube-integration/19312-526659/kubeconfig
	E0722 01:12:09.207378  672169 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/insufficient-storage-292066/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-292066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-292066
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-292066: (1.901685004s)
--- PASS: TestInsufficientStorage (10.53s)

                                                
                                    
TestRunningBinaryUpgrade (79.2s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.919193437 start -p running-upgrade-951695 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.919193437 start -p running-upgrade-951695 --memory=2200 --vm-driver=docker  --container-runtime=crio: (35.879029654s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-951695 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-951695 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.419874763s)
helpers_test.go:175: Cleaning up "running-upgrade-951695" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-951695
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-951695: (2.786698546s)
--- PASS: TestRunningBinaryUpgrade (79.20s)

                                                
                                    
TestKubernetesUpgrade (390.48s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-692569 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0722 01:14:11.245677  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-692569 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m6.959440996s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-692569
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-692569: (1.248855477s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-692569 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-692569 status --format={{.Host}}: exit status 7 (97.409926ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-692569 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-692569 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m39.062127542s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-692569 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-692569 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-692569 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (94.273633ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-692569] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-526659/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-526659/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-692569
	    minikube start -p kubernetes-upgrade-692569 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6925692 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-692569 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-692569 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-692569 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.404805735s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-692569" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-692569
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-692569: (2.484080422s)
--- PASS: TestKubernetesUpgrade (390.48s)

                                                
                                    
TestMissingContainerUpgrade (135.33s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.956153475 start -p missing-upgrade-956442 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.956153475 start -p missing-upgrade-956442 --memory=2200 --driver=docker  --container-runtime=crio: (1m8.964194493s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-956442
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-956442: (1.81949638s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-956442
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-956442 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-956442 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m1.547281944s)
helpers_test.go:175: Cleaning up "missing-upgrade-956442" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-956442
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-956442: (2.290615242s)
--- PASS: TestMissingContainerUpgrade (135.33s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-180059 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-180059 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (79.978159ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-180059] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-526659/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-526659/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (41.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-180059 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-180059 --driver=docker  --container-runtime=crio: (40.453805722s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-180059 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.07s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (19.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-180059 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-180059 --no-kubernetes --driver=docker  --container-runtime=crio: (16.713261226s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-180059 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-180059 status -o json: exit status 2 (421.553657ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-180059","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-180059
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-180059: (2.10071576s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.24s)

                                                
                                    
TestNoKubernetes/serial/Start (10.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-180059 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-180059 --no-kubernetes --driver=docker  --container-runtime=crio: (10.195755602s)
--- PASS: TestNoKubernetes/serial/Start (10.20s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-180059 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-180059 "sudo systemctl is-active --quiet service kubelet": exit status 1 (250.651594ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)
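The VerifyK8sNotRunning check above hinges on `systemctl is-active --quiet` carrying its answer entirely in the exit code: 0 means the unit is active, while nonzero (3, surfaced here as `ssh: Process exited with status 3`, typically means inactive) proves the kubelet is down. A minimal sketch of the same assertion, where `probe` is a hypothetical stand-in for the real `minikube ssh` invocation:

```python
import subprocess

def kubelet_stopped(probe=None):
    """Return True when the is-active probe exits nonzero (unit not active).

    `probe` is a hypothetical stand-in for
    `minikube ssh -p <profile> "sudo systemctl is-active --quiet service kubelet"`;
    the default simulates systemctl's usual "inactive" exit status of 3.
    """
    if probe is None:
        probe = lambda: subprocess.run(["sh", "-c", "exit 3"]).returncode
    return probe() != 0

# The test passes only when the probe fails, i.e. kubelet is not running.
print(kubelet_stopped())
```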

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.03s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.03s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.23s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-180059
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-180059: (1.230540706s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.74s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-180059 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-180059 --driver=docker  --container-runtime=crio: (6.73710818s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.74s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-180059 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-180059 "sudo systemctl is-active --quiet service kubelet": exit status 1 (253.700141ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.64s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.64s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (118.8s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2941458998 start -p stopped-upgrade-365167 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2941458998 start -p stopped-upgrade-365167 --memory=2200 --vm-driver=docker  --container-runtime=crio: (40.948904208s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2941458998 -p stopped-upgrade-365167 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2941458998 -p stopped-upgrade-365167 stop: (2.700667232s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-365167 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0722 01:16:01.799228  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
E0722 01:16:08.198049  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-365167 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m15.151919585s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (118.80s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-365167
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-365167: (1.155133653s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

                                                
                                    
TestPause/serial/Start (62.77s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-510552 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-510552 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m2.766934687s)
--- PASS: TestPause/serial/Start (62.77s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (24.14s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-510552 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0722 01:19:04.845741  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-510552 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.117554887s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (24.14s)

                                                
                                    
TestPause/serial/Pause (1s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-510552 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (1.00s)

                                                
                                    
TestPause/serial/VerifyStatus (0.34s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-510552 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-510552 --output=json --layout=cluster: exit status 2 (338.969245ms)

-- stdout --
	{"Name":"pause-510552","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-510552","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)
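The VerifyStatus payload above encodes component state as HTTP-style codes (418 Paused, 405 Stopped, 200 OK), and `minikube status` itself exits 2 for a non-Running layout, which is why the run is logged as a non-zero exit even though the subtest passes. A small sketch of reading that JSON, using a trimmed copy of the payload from the log (field subset only, not the full schema):

```python
import json

# Trimmed from the log above; normally the stdout of
# `minikube status -p pause-510552 --output=json --layout=cluster`.
payload = """
{"Name": "pause-510552", "StatusCode": 418, "StatusName": "Paused",
 "Nodes": [{"Name": "pause-510552", "StatusCode": 200,
            "Components": {"apiserver": {"StatusCode": 418, "StatusName": "Paused"},
                           "kubelet": {"StatusCode": 405, "StatusName": "Stopped"}}}]}
"""

status = json.loads(payload)
components = status["Nodes"][0]["Components"]
# 418 marks a paused cluster/component; 405 marks a stopped one.
print(status["StatusName"], components["apiserver"]["StatusName"],
      components["kubelet"]["StatusName"])
```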

                                                
                                    
TestPause/serial/Unpause (0.91s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-510552 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.91s)

                                                
                                    
TestPause/serial/PauseAgain (1.32s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-510552 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-510552 --alsologtostderr -v=5: (1.324494583s)
--- PASS: TestPause/serial/PauseAgain (1.32s)

                                                
                                    
TestPause/serial/DeletePaused (3.44s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-510552 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-510552 --alsologtostderr -v=5: (3.437315086s)
--- PASS: TestPause/serial/DeletePaused (3.44s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (12.9s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (12.848607663s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-510552
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-510552: exit status 1 (14.368839ms)

-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-510552: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (12.90s)
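VerifyDeletedResources treats a failing `docker volume inspect` as proof that the profile's volume is gone: the command exits 1, prints an empty `[]` on stdout, and reports "no such volume" on stderr. A sketch of that check, with a hypothetical `inspect` hook so the example stays runnable without a Docker daemon:

```python
import subprocess

def volume_deleted(name, inspect=None):
    """True when `docker volume inspect <name>` fails, i.e. no such volume.

    `inspect` is a hypothetical hook standing in for
    subprocess.run(["docker", "volume", "inspect", name]); the default
    simulates the "no such volume" failure (exit status 1).
    """
    if inspect is None:
        inspect = lambda n: subprocess.run(["sh", "-c", "exit 1"]).returncode
    return inspect(name) != 0

print(volume_deleted("pause-510552"))
```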

                                                
                                    
TestNetworkPlugins/group/false (4.98s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-024040 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-024040 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (316.164937ms)

-- stdout --
	* [false-024040] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-526659/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-526659/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration

-- /stdout --
** stderr ** 
	I0722 01:20:08.810231  712212 out.go:291] Setting OutFile to fd 1 ...
	I0722 01:20:08.810482  712212 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 01:20:08.810511  712212 out.go:304] Setting ErrFile to fd 2...
	I0722 01:20:08.810529  712212 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 01:20:08.810830  712212 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-526659/.minikube/bin
	I0722 01:20:08.811286  712212 out.go:298] Setting JSON to false
	I0722 01:20:08.813018  712212 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":118960,"bootTime":1721492249,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0722 01:20:08.813115  712212 start.go:139] virtualization:  
	I0722 01:20:08.816846  712212 out.go:177] * [false-024040] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0722 01:20:08.819690  712212 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 01:20:08.819756  712212 notify.go:220] Checking for updates...
	I0722 01:20:08.825991  712212 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 01:20:08.829606  712212 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-526659/kubeconfig
	I0722 01:20:08.832289  712212 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-526659/.minikube
	I0722 01:20:08.835065  712212 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0722 01:20:08.837684  712212 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 01:20:08.840989  712212 config.go:182] Loaded profile config "force-systemd-flag-975128": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 01:20:08.841169  712212 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 01:20:08.896348  712212 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0722 01:20:08.896455  712212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0722 01:20:09.046192  712212 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-22 01:20:09.031473209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0722 01:20:09.046311  712212 docker.go:307] overlay module found
	I0722 01:20:09.049440  712212 out.go:177] * Using the docker driver based on user configuration
	I0722 01:20:09.052012  712212 start.go:297] selected driver: docker
	I0722 01:20:09.052041  712212 start.go:901] validating driver "docker" against <nil>
	I0722 01:20:09.052056  712212 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 01:20:09.055284  712212 out.go:177] 
	W0722 01:20:09.057902  712212 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0722 01:20:09.060945  712212 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-024040 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-024040

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-024040

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-024040

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-024040

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-024040

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-024040

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-024040

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-024040

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-024040

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-024040

>>> host: /etc/nsswitch.conf:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

>>> host: /etc/hosts:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

>>> host: /etc/resolv.conf:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-024040

>>> host: crictl pods:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

>>> host: crictl containers:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

>>> k8s: describe netcat deployment:
error: context "false-024040" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-024040" does not exist

>>> k8s: netcat logs:
error: context "false-024040" does not exist

>>> k8s: describe coredns deployment:
error: context "false-024040" does not exist

>>> k8s: describe coredns pods:
error: context "false-024040" does not exist

>>> k8s: coredns logs:
error: context "false-024040" does not exist

>>> k8s: describe api server pod(s):
error: context "false-024040" does not exist

>>> k8s: api server logs:
error: context "false-024040" does not exist

>>> host: /etc/cni:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

>>> host: ip a s:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

>>> host: ip r s:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

>>> host: iptables-save:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

>>> host: iptables table nat:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

>>> k8s: describe kube-proxy daemon set:
error: context "false-024040" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-024040" does not exist

>>> k8s: kube-proxy logs:
error: context "false-024040" does not exist

>>> host: kubelet daemon status:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

>>> host: kubelet daemon config:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

>>> k8s: kubelet logs:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19312-526659/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 22 Jul 2024 01:20:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.85.2:8443
  name: force-systemd-flag-975128
contexts:
- context:
    cluster: force-systemd-flag-975128
    extensions:
    - extension:
        last-update: Mon, 22 Jul 2024 01:20:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: force-systemd-flag-975128
  name: force-systemd-flag-975128
current-context: force-systemd-flag-975128
kind: Config
preferences: {}
users:
- name: force-systemd-flag-975128
  user:
    client-certificate: /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/force-systemd-flag-975128/client.crt
    client-key: /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/force-systemd-flag-975128/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-024040

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024040"

                                                
                                                
----------------------- debugLogs end: false-024040 [took: 4.511515653s] --------------------------------
helpers_test.go:175: Cleaning up "false-024040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-024040
--- PASS: TestNetworkPlugins/group/false (4.98s)

TestStartStop/group/old-k8s-version/serial/FirstStart (163.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-127703 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-127703 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m43.009068251s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (163.01s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-127703 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fe422f29-b67c-4c84-abc0-682c1b36f189] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fe422f29-b67c-4c84-abc0-682c1b36f189] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.00532617s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-127703 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.56s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-127703 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-127703 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-127703 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-127703 --alsologtostderr -v=3: (12.007283316s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-127703 -n old-k8s-version-127703
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-127703 -n old-k8s-version-127703: exit status 7 (66.945551ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-127703 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/old-k8s-version/serial/SecondStart (127.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-127703 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-127703 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m7.409463991s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-127703 -n old-k8s-version-127703
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (127.92s)

TestStartStop/group/no-preload/serial/FirstStart (76.88s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-910580 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0722 01:26:01.799511  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
E0722 01:26:08.198086  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-910580 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m16.882330168s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.88s)

TestStartStop/group/no-preload/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-910580 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [74f9319a-4dc3-445d-8dfa-13ccc1ea64be] Pending
helpers_test.go:344: "busybox" [74f9319a-4dc3-445d-8dfa-13ccc1ea64be] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [74f9319a-4dc3-445d-8dfa-13ccc1ea64be] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004277644s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-910580 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.37s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-910580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-910580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.020674945s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-910580 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/no-preload/serial/Stop (12.02s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-910580 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-910580 --alsologtostderr -v=3: (12.021295385s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.02s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-910580 -n no-preload-910580
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-910580 -n no-preload-910580: exit status 7 (119.019189ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-910580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/no-preload/serial/SecondStart (266.44s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-910580 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-910580 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (4m26.079628406s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-910580 -n no-preload-910580
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.44s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-wpvgh" [37fb5d20-5048-4fc1-aade-15ea265f918c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003913059s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-wpvgh" [37fb5d20-5048-4fc1-aade-15ea265f918c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004771859s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-127703 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-127703 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (3.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-127703 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-127703 -n old-k8s-version-127703
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-127703 -n old-k8s-version-127703: exit status 2 (330.85085ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-127703 -n old-k8s-version-127703
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-127703 -n old-k8s-version-127703: exit status 2 (356.9806ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-127703 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-127703 --alsologtostderr -v=1: (1.075975351s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-127703 -n old-k8s-version-127703
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-127703 -n old-k8s-version-127703
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.52s)

TestStartStop/group/embed-certs/serial/FirstStart (62.16s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-116944 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-116944 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3: (1m2.157618379s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (62.16s)

TestStartStop/group/embed-certs/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-116944 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4ed6ebc4-c08c-45e2-a520-028d248c1742] Pending
helpers_test.go:344: "busybox" [4ed6ebc4-c08c-45e2-a520-028d248c1742] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4ed6ebc4-c08c-45e2-a520-028d248c1742] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003881351s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-116944 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.35s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-116944 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-116944 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.004045008s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-116944 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/embed-certs/serial/Stop (11.97s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-116944 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-116944 --alsologtostderr -v=3: (11.966179128s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.97s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-116944 -n embed-certs-116944
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-116944 -n embed-certs-116944: exit status 7 (76.452645ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-116944 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (268.24s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-116944 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3
E0722 01:29:16.227575  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/old-k8s-version-127703/client.crt: no such file or directory
E0722 01:29:16.232844  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/old-k8s-version-127703/client.crt: no such file or directory
E0722 01:29:16.243095  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/old-k8s-version-127703/client.crt: no such file or directory
E0722 01:29:16.263373  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/old-k8s-version-127703/client.crt: no such file or directory
E0722 01:29:16.303625  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/old-k8s-version-127703/client.crt: no such file or directory
E0722 01:29:16.383957  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/old-k8s-version-127703/client.crt: no such file or directory
E0722 01:29:16.544202  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/old-k8s-version-127703/client.crt: no such file or directory
E0722 01:29:16.864795  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/old-k8s-version-127703/client.crt: no such file or directory
E0722 01:29:17.505811  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/old-k8s-version-127703/client.crt: no such file or directory
E0722 01:29:18.786024  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/old-k8s-version-127703/client.crt: no such file or directory
E0722 01:29:21.346367  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/old-k8s-version-127703/client.crt: no such file or directory
E0722 01:29:26.467511  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/old-k8s-version-127703/client.crt: no such file or directory
E0722 01:29:36.708001  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/old-k8s-version-127703/client.crt: no such file or directory
E0722 01:29:57.188152  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/old-k8s-version-127703/client.crt: no such file or directory
E0722 01:30:38.148487  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/old-k8s-version-127703/client.crt: no such file or directory
E0722 01:30:51.246533  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
E0722 01:31:01.799183  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-116944 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3: (4m27.698790833s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-116944 -n embed-certs-116944
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (268.24s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-ftm7b" [8128a00f-e39e-415a-a1db-70cb7bc66662] Running
E0722 01:31:08.197547  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004093917s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-ftm7b" [8128a00f-e39e-415a-a1db-70cb7bc66662] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004442909s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-910580 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-910580 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (3.08s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-910580 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-910580 -n no-preload-910580
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-910580 -n no-preload-910580: exit status 2 (314.821248ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-910580 -n no-preload-910580
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-910580 -n no-preload-910580: exit status 2 (324.35989ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-910580 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-910580 -n no-preload-910580
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-910580 -n no-preload-910580
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.08s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-346637 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3
E0722 01:32:00.069425  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/old-k8s-version-127703/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-346637 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3: (1m0.134376303s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.13s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-346637 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bdbbf047-0344-41c0-80f5-9086f15f623c] Pending
helpers_test.go:344: "busybox" [bdbbf047-0344-41c0-80f5-9086f15f623c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bdbbf047-0344-41c0-80f5-9086f15f623c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003480796s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-346637 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-346637 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-346637 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-346637 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-346637 --alsologtostderr -v=3: (11.967459531s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-346637 -n default-k8s-diff-port-346637
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-346637 -n default-k8s-diff-port-346637: exit status 7 (94.809624ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-346637 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-346637 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-346637 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3: (4m26.562802987s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-346637 -n default-k8s-diff-port-346637
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.92s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-22szh" [e1bd1849-6ebc-4558-8296-687f0f035824] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004258415s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-22szh" [e1bd1849-6ebc-4558-8296-687f0f035824] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004501558s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-116944 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-116944 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (3.03s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-116944 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-116944 -n embed-certs-116944
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-116944 -n embed-certs-116944: exit status 2 (314.988019ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-116944 -n embed-certs-116944
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-116944 -n embed-certs-116944: exit status 2 (306.58122ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-116944 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-116944 -n embed-certs-116944
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-116944 -n embed-certs-116944
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.03s)

TestStartStop/group/newest-cni/serial/FirstStart (41.95s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-162380 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-162380 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (41.948281347s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.95s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.41s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-162380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-162380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.413738375s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.41s)

TestStartStop/group/newest-cni/serial/Stop (1.42s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-162380 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-162380 --alsologtostderr -v=3: (1.415714736s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.42s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-162380 -n newest-cni-162380
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-162380 -n newest-cni-162380: exit status 7 (92.032082ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-162380 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (16.34s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-162380 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0722 01:34:16.228207  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/old-k8s-version-127703/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-162380 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (15.87426506s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-162380 -n newest-cni-162380
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.34s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-162380 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (3.14s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-162380 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-162380 -n newest-cni-162380
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-162380 -n newest-cni-162380: exit status 2 (320.956084ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-162380 -n newest-cni-162380
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-162380 -n newest-cni-162380: exit status 2 (320.767143ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-162380 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-162380 -n newest-cni-162380
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-162380 -n newest-cni-162380
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.14s)

TestNetworkPlugins/group/auto/Start (64.29s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-024040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0722 01:34:43.909636  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/old-k8s-version-127703/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-024040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m4.288617981s)
--- PASS: TestNetworkPlugins/group/auto/Start (64.29s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-024040 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (13.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-024040 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-v4pmj" [a76563e3-cb80-4e23-81ca-e8c672330bb6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-v4pmj" [a76563e3-cb80-4e23-81ca-e8c672330bb6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.003939323s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.28s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-024040 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-024040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-024040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestNetworkPlugins/group/kindnet/Start (62.39s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-024040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0722 01:36:01.799663  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
E0722 01:36:08.197520  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/addons-783853/client.crt: no such file or directory
E0722 01:36:17.596376  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/no-preload-910580/client.crt: no such file or directory
E0722 01:36:17.602295  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/no-preload-910580/client.crt: no such file or directory
E0722 01:36:17.612479  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/no-preload-910580/client.crt: no such file or directory
E0722 01:36:17.632779  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/no-preload-910580/client.crt: no such file or directory
E0722 01:36:17.673015  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/no-preload-910580/client.crt: no such file or directory
E0722 01:36:17.754137  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/no-preload-910580/client.crt: no such file or directory
E0722 01:36:17.914498  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/no-preload-910580/client.crt: no such file or directory
E0722 01:36:18.235514  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/no-preload-910580/client.crt: no such file or directory
E0722 01:36:18.875699  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/no-preload-910580/client.crt: no such file or directory
E0722 01:36:20.155890  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/no-preload-910580/client.crt: no such file or directory
E0722 01:36:22.716111  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/no-preload-910580/client.crt: no such file or directory
E0722 01:36:27.836561  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/no-preload-910580/client.crt: no such file or directory
E0722 01:36:38.076857  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/no-preload-910580/client.crt: no such file or directory
E0722 01:36:58.557145  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/no-preload-910580/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-024040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m2.384907643s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (62.39s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-r8294" [2201eb65-150c-4f1e-bbc6-3896b51d468a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004026194s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-024040 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.23s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-024040 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tzcrd" [87c08766-e124-4418-9900-098ea27e6941] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tzcrd" [87c08766-e124-4418-9900-098ea27e6941] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003618731s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.23s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-g88kq" [26dcfffa-e423-4f13-a577-6d0ff4a9f093] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003356519s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-g88kq" [26dcfffa-e423-4f13-a577-6d0ff4a9f093] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003975269s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-346637 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-024040 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-024040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-024040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-346637 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.57s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-346637 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-346637 -n default-k8s-diff-port-346637
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-346637 -n default-k8s-diff-port-346637: exit status 2 (320.035521ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-346637 -n default-k8s-diff-port-346637
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-346637 -n default-k8s-diff-port-346637: exit status 2 (324.536825ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-346637 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-346637 -n default-k8s-diff-port-346637
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-346637 -n default-k8s-diff-port-346637
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.57s)
E0722 01:41:45.279600  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/no-preload-910580/client.crt: no such file or directory
E0722 01:41:49.213493  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/auto-024040/client.crt: no such file or directory
E0722 01:42:03.785995  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/kindnet-024040/client.crt: no such file or directory
E0722 01:42:03.791401  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/kindnet-024040/client.crt: no such file or directory
E0722 01:42:03.801753  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/kindnet-024040/client.crt: no such file or directory
E0722 01:42:03.822142  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/kindnet-024040/client.crt: no such file or directory
E0722 01:42:03.862454  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/kindnet-024040/client.crt: no such file or directory
E0722 01:42:03.942820  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/kindnet-024040/client.crt: no such file or directory
E0722 01:42:04.103243  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/kindnet-024040/client.crt: no such file or directory
E0722 01:42:04.423835  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/kindnet-024040/client.crt: no such file or directory
E0722 01:42:05.064755  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/kindnet-024040/client.crt: no such file or directory
E0722 01:42:06.345308  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/kindnet-024040/client.crt: no such file or directory
E0722 01:42:08.905544  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/kindnet-024040/client.crt: no such file or directory
E0722 01:42:14.026073  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/kindnet-024040/client.crt: no such file or directory
E0722 01:42:24.266689  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/kindnet-024040/client.crt: no such file or directory
E0722 01:42:25.462877  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/default-k8s-diff-port-346637/client.crt: no such file or directory
E0722 01:42:25.468197  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/default-k8s-diff-port-346637/client.crt: no such file or directory
E0722 01:42:25.478552  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/default-k8s-diff-port-346637/client.crt: no such file or directory
E0722 01:42:25.498840  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/default-k8s-diff-port-346637/client.crt: no such file or directory
E0722 01:42:25.539207  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/default-k8s-diff-port-346637/client.crt: no such file or directory
E0722 01:42:25.619477  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/default-k8s-diff-port-346637/client.crt: no such file or directory
E0722 01:42:25.779833  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/default-k8s-diff-port-346637/client.crt: no such file or directory
E0722 01:42:26.100388  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/default-k8s-diff-port-346637/client.crt: no such file or directory
E0722 01:42:26.740656  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/default-k8s-diff-port-346637/client.crt: no such file or directory
E0722 01:42:28.021527  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/default-k8s-diff-port-346637/client.crt: no such file or directory
E0722 01:42:30.582624  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/default-k8s-diff-port-346637/client.crt: no such file or directory

TestNetworkPlugins/group/calico/Start (74.74s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-024040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0722 01:37:39.518259  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/no-preload-910580/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-024040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m14.739503693s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.74s)

TestNetworkPlugins/group/custom-flannel/Start (76.13s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-024040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-024040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m16.125243456s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (76.13s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-tffsl" [0b5bd763-33fa-4eff-b6c4-5f2d8efe25c3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005328759s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-024040 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (11.24s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-024040 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-lpf2x" [d2ca97df-eeae-4024-b1a7-32950493f6a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-lpf2x" [d2ca97df-eeae-4024-b1a7-32950493f6a6] Running
E0722 01:39:01.439389  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/no-preload-910580/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004738957s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.24s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-024040 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-024040 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-8xjqb" [c1ab083d-bfcb-4445-aa87-36d6c88fc448] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-8xjqb" [c1ab083d-bfcb-4445-aa87-36d6c88fc448] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004678827s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)

TestNetworkPlugins/group/calico/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-024040 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

TestNetworkPlugins/group/calico/Localhost (0.27s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-024040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.27s)

TestNetworkPlugins/group/calico/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-024040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

TestNetworkPlugins/group/custom-flannel/DNS (0.26s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-024040 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-024040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-024040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

TestNetworkPlugins/group/enable-default-cni/Start (62.49s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-024040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-024040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m2.491714883s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (62.49s)

TestNetworkPlugins/group/flannel/Start (67.25s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-024040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0722 01:40:27.291019  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/auto-024040/client.crt: no such file or directory
E0722 01:40:27.296270  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/auto-024040/client.crt: no such file or directory
E0722 01:40:27.306527  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/auto-024040/client.crt: no such file or directory
E0722 01:40:27.326767  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/auto-024040/client.crt: no such file or directory
E0722 01:40:27.367050  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/auto-024040/client.crt: no such file or directory
E0722 01:40:27.447310  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/auto-024040/client.crt: no such file or directory
E0722 01:40:27.607876  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/auto-024040/client.crt: no such file or directory
E0722 01:40:27.928959  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/auto-024040/client.crt: no such file or directory
E0722 01:40:28.569301  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/auto-024040/client.crt: no such file or directory
E0722 01:40:29.849968  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/auto-024040/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-024040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m7.254573241s)
--- PASS: TestNetworkPlugins/group/flannel/Start (67.25s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-024040 "pgrep -a kubelet"
E0722 01:40:32.410360  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/auto-024040/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.36s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-024040 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bcdct" [a599914f-3840-4424-8ea2-bab9097ce227] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0722 01:40:37.531021  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/auto-024040/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-bcdct" [a599914f-3840-4424-8ea2-bab9097ce227] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003984843s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.36s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-024040 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-024040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-024040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-m7nqw" [1750827c-4e6b-42b0-8277-dc0c14583a9e] Running
E0722 01:40:47.772249  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/auto-024040/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005648293s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-024040 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

TestNetworkPlugins/group/flannel/NetCatPod (11.35s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-024040 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-kql9q" [998952f3-2806-452e-af2d-b9ca82c1e8ac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-kql9q" [998952f3-2806-452e-af2d-b9ca82c1e8ac] Running
E0722 01:41:01.799352  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/functional-464385/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004656445s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.35s)

TestNetworkPlugins/group/bridge/Start (89.39s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-024040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-024040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m29.387796623s)
--- PASS: TestNetworkPlugins/group/bridge/Start (89.39s)

TestNetworkPlugins/group/flannel/DNS (0.29s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-024040 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.29s)

TestNetworkPlugins/group/flannel/Localhost (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-024040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/flannel/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-024040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-024040 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

TestNetworkPlugins/group/bridge/NetCatPod (10.25s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-024040 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-c8kjh" [3a0f4562-658b-4ec0-883c-5c74ad910e2a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0722 01:42:35.703249  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/default-k8s-diff-port-346637/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-c8kjh" [3a0f4562-658b-4ec0-883c-5c74ad910e2a] Running
E0722 01:42:44.747719  532157 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-526659/.minikube/profiles/kindnet-024040/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004751778s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

TestNetworkPlugins/group/bridge/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-024040 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-024040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-024040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (33/336)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0.51s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-688994 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-688994" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-688994
--- SKIP: TestDownloadOnlyKic (0.51s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/Volcano (0s)
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano
=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-165736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-165736
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (5.45s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-024040 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-024040

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-024040

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-024040

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-024040

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-024040

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-024040

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-024040

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-024040

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-024040

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-024040

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-024040

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: crictl containers:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> k8s: describe netcat deployment:
error: context "kubenet-024040" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-024040" does not exist

>>> k8s: netcat logs:
error: context "kubenet-024040" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-024040" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-024040" does not exist

>>> k8s: coredns logs:
error: context "kubenet-024040" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-024040" does not exist

>>> k8s: api server logs:
error: context "kubenet-024040" does not exist

>>> host: /etc/cni:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: ip a s:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: ip r s:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: iptables-save:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: iptables table nat:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-024040" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-024040" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-024040" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: kubelet daemon config:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> k8s: kubelet logs:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-024040

>>> host: docker daemon status:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: docker daemon config:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: docker system info:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: cri-docker daemon status:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: cri-docker daemon config:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: cri-dockerd version:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: containerd daemon status:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: containerd daemon config:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: containerd config dump:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: crio daemon status:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: crio daemon config:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: /etc/crio:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

>>> host: crio config:
* Profile "kubenet-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024040"

----------------------- debugLogs end: kubenet-024040 [took: 5.264419675s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-024040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-024040
--- SKIP: TestNetworkPlugins/group/kubenet (5.45s)

TestNetworkPlugins/group/cilium (4.88s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-024040 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-024040

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-024040

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-024040

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-024040

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-024040

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-024040

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-024040

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-024040

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-024040

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-024040

>>> host: /etc/nsswitch.conf:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

>>> host: /etc/hosts:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

>>> host: /etc/resolv.conf:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-024040

>>> host: crictl pods:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

>>> host: crictl containers:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

>>> k8s: describe netcat deployment:
error: context "cilium-024040" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-024040" does not exist

>>> k8s: netcat logs:
error: context "cilium-024040" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-024040" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-024040" does not exist

>>> k8s: coredns logs:
error: context "cilium-024040" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-024040" does not exist

>>> k8s: api server logs:
error: context "cilium-024040" does not exist

>>> host: /etc/cni:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

>>> host: ip a s:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

>>> host: ip r s:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

>>> host: iptables-save:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

>>> host: iptables table nat:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-024040

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-024040

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-024040" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-024040" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-024040

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-024040

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-024040" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-024040" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-024040" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-024040" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-024040" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-024040

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

>>> host: cri-dockerd version:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

>>> host: containerd daemon status:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

>>> host: containerd daemon config:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

>>> host: containerd config dump:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

>>> host: crio daemon status:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

>>> host: crio daemon config:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

>>> host: /etc/crio:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

>>> host: crio config:
* Profile "cilium-024040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024040"

----------------------- debugLogs end: cilium-024040 [took: 4.716747501s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-024040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-024040
--- SKIP: TestNetworkPlugins/group/cilium (4.88s)